Overriding the file name of a downloaded file in a Perl CGI script

I need to be able to offer files for download, but I have to read and print the file from my CGI script. I tried this:
#!/usr/bin/perl -w
use strict;
push( @INC, $lib_directory );
require 'lib_utils.pl';
dl_file('/tmp/final.pdf');
as the main page (dl.pl), and
sub dl_file {
    my ($file) = @_;

    if ( ! -e $file ) {
        print "file does not exist";
        return 0;
    }

    my $content = read_file( $file, binmode => ':utf8' );

    $file =~ m#(.*)([^/]*)$#;
    my $directory = $1;
    my $filename  = $2;
    chdir $directory;

    my $form = new CGI;
    print $form->header(
        -type                  => 'application/octet-stream',
        -attachment            => $filename,
        -filename              => $filename,
        '-Content-Disposition' => "attachment; filename=$filename",
    );
    $form->print($content);
    return 1;
}
for the called function. The funny thing is, this code works just fine if I don't use a sub and keep all the code in dl.pl, BUT as soon as I move the code into a sub, the downloaded file is named after the script (i.e. dl.pl).
How would you change it, or how would you go about this?
Thanks in advance for your help

Your line
$file =~ m#(.*)([^/]*)$#
will leave $1 containing the whole of $file and $2 empty. You need a slash in there somewhere, probably like this
$file =~ m#(.*)/([^/]*)$#
It would also make sense to make the directory optional, like so
$file =~ m#(?:(.*)/)?([^/]*)$#
my $directory = $1;
and you would have to write
chdir $directory if $directory;
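For illustration, here is a minimal self-contained sketch (added here, not part of the original answer) showing what the corrected pattern captures, both for the path from the question and for a bare filename:
#!/usr/bin/perl
use strict;
use warnings;

# Demonstration of the optional-directory pattern suggested above.
for my $file ( '/tmp/final.pdf', 'final.pdf' ) {
    if ( $file =~ m#(?:(.*)/)?([^/]*)$# ) {
        my $directory = $1;    # undef when there is no directory part
        my $filename  = $2;
        printf "%-15s -> directory: %-6s filename: %s\n",
            $file, defined $directory ? $directory : '(none)', $filename;
    }
}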

This is what's tripping you up:
$file =~ m#(.*)([^/]*)$#;
It looks like you're trying to split "/tmp/final.pdf" into directory and file, but that pattern doesn't do that. With:
print "F:",$filename,"\n";
print "D:",$directory,"\n";
you get this output:
F:
D:/tmp/final.pdf
This is why you have the problem - you don't have a filename, so it defaults to using the script name.
I would suggest instead you want:
my ( $directory, $filename ) = ( $file =~ m,(.*/)([\.\w]+)$, );
This gives:
F:final.pdf
D:/tmp/

As has been said, you're suffering from the greedy matching of .* which will eat up the entire string:
$file =~ m{(.*)([^/]*)$};
There are three easy solutions to this
1. Boundary Conditions
As has been stated, you can add a boundary condition that limits how much .* can match:
$file =~ m{(?:(.*)/)?([^/]*)$};
my $dir = $1 // '';
my $filename = $2;
Or this somewhat convoluted lookbehind assertion can also enforce a boundary:
$file =~ m{(.*)(?<![^/])([^/]*)$};
my $dir = $1;
my $filename = $2;
2. Non-greedy matching
However, the simplest regex solution is to use non-greedy matching .*?:
$file =~ m{(.*?)([^/]*)$};
my ($dir, $filename) = ($1, $2);
Basically, anytime you're about to put .* anywhere, check your assumptions. The majority of the time you'll actually want .*? instead.
3. Module for parsing file paths
The bonus option is just to use a module like File::Spec to parse the file path information:
use File::Spec;
my ($vol, $dirs, $filename) = File::Spec->splitpath( $file );
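As a quick illustration (a sketch added here, not from the original answer), this is what splitpath returns for the path in the question; on Unix the volume part is simply an empty string:
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;

my $file = '/tmp/final.pdf';
my ($vol, $dirs, $filename) = File::Spec->splitpath($file);
print "vol:      '$vol'\n";        # ''
print "dirs:     '$dirs'\n";       # '/tmp/'
print "filename: '$filename'\n";   # 'final.pdf'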


readdir() attempted on invalid dirhandle $par_dir

I am just trying to execute a perl script inside multiple folders, but I don't understand why I get readdir() attempted on invalid dirhandle $par_dir. $parent is printed correctly, but $par_dir is printed as "GLOB(0x17e7a68)".
Any idea of why it is happening? Thanks a lot!
Here the code:
#!/usr/bin/perl
use warnings;
use Cwd;
use FileHandle;
use File::Glob;

my $parent = "/media/sequentia/NAS/projects/131-prgdb3/01- DATA/All_plant_genomes_proteomes";
my ($par_dir, $sub_dir);
opendir($par_dir, $parent);
print $parent."\n";
print $par_dir."\n";

while (my $sub_folders = readdir($par_dir)) {
    next if ($sub_folders =~ /^..?$/);   # skip . and ..
    my $path = $parent . '/' . $sub_folders;
    next unless (-d $path);              # skip anything that isn't a directory
    print $path."\n";
    chdir($path) or die;

    @files = glob( $path . '/*' );
    foreach $filename (@files) {
        print $filename ."\n";
        system ("grep 'comment' PutativeGenes.txt | wc -l");
        system ("grep 'class' PutativeGenes.txt | wc -l");
    }
}
closedir($par_dir);
closedir($par_dir);
The problem is probably that the directory you specify in $parent doesn't exist. You must always check to make sure that a call to open or opendir succeeded before going on to use the handle
That path step 01- DATA is suspicious. I would expect 01-DATA or perhaps 01- DATA with a single space, but multiple spaces are rarely used because they are invisible and difficult to count
Here are some other thoughts on your program
You must always use strict and use warnings 'all' at the top of every Perl program you write. That will alert you to many simple errors that you may otherwise overlook
Your statement next if ( $sub_folders =~ /^..?$/ ) is wrong because the dots must be escaped. As it is you are discarding any name that is one or two characters in length
If your path really does contain spaces then you need to use File::Glob ':bsd_glob', as otherwise the spaces will be treated as separators between multiple glob patterns
You execute the foreach loop for every file or directory found in $path, but your system calls aren't affected by the name of that file, so you're making the same call multiple times
It's worth noting that glob will do all the directory searching for you. I would write something like this
#!/usr/bin/perl
use strict;
use warnings 'all';
use File::Glob ':bsd_glob';

my $parent_dir = "/media/sequentia/NAS/projects/131-prgdb3/01-DATA/All_plant_genomes_proteomes";
print "$parent_dir\n";

while ( my $path = glob "$parent_dir/*" ) {
    next unless -d $path;
    print "$path\n";
    chdir $path or die qq{Unable to chdir to "$path": $!};

    while ( my $filename = glob "$path/*" ) {
        next unless -f $filename;
        print "$filename\n";
        system "grep 'comment' PutativeGenes.txt | wc -l";
        system "grep 'class' PutativeGenes.txt | wc -l";
    }
}
Most likely opendir() is failing, which is what gives you the invalid dirhandle (probably because the $parent directory you are trying to open does not exist).
If opendir fails it returns false, and $par_dir is left unchanged as undef. If you then call readdir() on an undefined directory handle you will get a runtime warning like:
readdir() attempted on invalid dirhandle at ...
Therefore you should always check the return code from opendir. For example, you can do:
opendir($par_dir, $parent) or die "opendir() failed: $!";
or see more suggestions on what to do in the linked question Does die have to be used if opening a file fails?
Note that your code could have been simplified using File::Find::Rule, for example:
use feature 'say';
use File::Find::Rule;

my @dirs = File::Find::Rule
    ->directory->maxdepth(1)->mindepth(1)->in( $parent );

for my $dir (@dirs) {
    say "$dir";
    my @files = File::Find::Rule->file->maxdepth(1)->in( $dir );
    say "--> $_" for @files;
}
Alternatively, if you don't need the directory names:
my @files = File::Find::Rule
    ->file->maxdepth(2)->mindepth(2)->in( $parent );
say for @files;

In Perl, how can I filter all log files in a directory and extract interesting lines?

I'm trying to select only the .log files in my directory and then search in those files for the word "unbound" and print the entire line into a new output file with the same name as the log file (number###.log) but with a .txt extension. This is what I have so far:
#!/usr/bin/perl
use strict;
use warnings;

my $path    = $ARGV[0];
my $outpath = $ARGV[1];

my @files;
my $files;
opendir(DIR, $path) or die "$!";
@files = grep { /\.log$/ } readdir(DIR);

my @out;
my $out;
opendir(OUT, $outpath) or die "$!";

my $line;
foreach $files (@files) {
    open (FILE, "$files");
    my @line = <FILE>;
    my $regex = Unbound;
    open (OUT, ">>$out");
    print grep { $line =~ /$regex/ } <>;
}
close OUT;
close FILE;
closedir(DIR);
closedir (OUT);
I'm a beginner, and I don't really know how to create a new text file with the acquired output.
A few things I'd suggest to improve this code:
declare your loop iterators within the loop. foreach my $file ( #files ) {
use 3 arg open: open ( my $input_fh, "<", $filename );
use glob rather than opendir then grep. foreach my $file ( <$path/*.txt> ) {
grep is good for extracting things into arrays. Your grep reads the whole file to print it, which isn't necessary. Doesn't matter much if the file is short though.
perltidy is great for reformatting code.
you're opening 'OUT' as a directory handle on $outpath (I think?), and later reusing OUT as an output filehandle, which isn't going to work. $outpath is a directory, not a file you can print to, so you need to do something different to write to the different output files; opendir isn't valid for output.
because you're using opendir/readdir, you get bare filenames, not full paths, so you might be in the wrong place to actually open the files. Prepending the path name or doing a chdir are possible solutions, but that's one of the reasons I like glob: it returns the path as well.
So with that in mind - how about:
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;

# Extract paths
my $input_path  = $ARGV[0];
my $output_path = $ARGV[1];

# Error if paths are invalid.
unless ( defined $input_path
    and -d $input_path
    and defined $output_path
    and -d $output_path )
{
    die "Usage: $0 <input_path> <output_path>\n";
}

foreach my $filename (<$input_path/*.log>) {

    # extract the 'name' bit of the filename.
    # be slightly careful with this - it's based
    # on an assumption which isn't always true.
    # File::Spec is a more powerful way of accomplishing this.
    # but should grab 'number####' from /path/to/file/number####.log
    my $output_file = basename( $filename, '.log' );

    # open input and output filehandles.
    open( my $input_fh,  "<", $filename ) or die $!;
    open( my $output_fh, ">", "$output_path/$output_file.txt" ) or die $!;
    print "Processing $filename -> $output_path/$output_file.txt\n";

    # iterate input, extracting into $line
    while ( my $line = <$input_fh> ) {

        # check if $line matches your RE.
        if ( $line =~ m/Unbound/ ) {

            # write it to output.
            print {$output_fh} $line;
        }
    }

    # tidy up our filehandles. Although technically, they'll
    # close automatically because they leave scope
    close($output_fh);
    close($input_fh);
}
Here is a script that takes advantage of Path::Tiny. Now, at this stage of your learning process, you are probably better off understanding @Sobrique's solution, but using modules such as Path::Tiny or Path::Class will make it easier to write these one-off scripts more quickly, and correctly.
Also, I didn't really test this script, so watch out for bugs.
#!/usr/bin/env perl

use strict;
use warnings;

use Path::Tiny;

run(\@ARGV);

sub run {
    my $argv = shift;

    unless (@$argv == 2) {
        die "Need source and destination paths\n";
    }

    my $it = path($argv->[0])->realpath->iterator({
        recurse         => 0,
        follow_symlinks => 0,
    });

    my $outdir = path($argv->[1])->realpath;

    while (my $path = $it->()) {
        next unless -f $path;
        next unless $path =~ /[.]log\z/;

        my $logfh   = $path->openr;
        my $outfile = $outdir->child($path->basename('.log') . '.txt');
        my $outfh;

        while (my $line = <$logfh>) {
            next unless $line =~ /Unbound/;
            unless ($outfh) {
                $outfh = $outfile->openw;
            }
            print $outfh $line;
        }

        # only close the output handle if we actually opened one
        if ($outfh) {
            close $outfh
                or die "Cannot close output '$outfile': $!";
        }
    }
}
Notes
realpath will croak if the path provided does not exist.
Similarly for openr and openw.
I am reading input files line-by-line to keep the memory footprint of the program independent of the sizes of input files.
I do not open the output file until I know I have a match to print to.
When matching a file extension using a regular expression pattern, keep in mind that \n is a valid character in Unix file names, and the $ anchor will match it.
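To make that last note concrete, here is a small sketch (added for illustration, not part of the original answer); a Unix file name may legally end in a newline, and $ will still match just before it:
#!/usr/bin/perl
use strict;
use warnings;

my $tricky = "data.log\n";   # unusual but legal Unix file name

print "with \$  : ", ( $tricky =~ /[.]log$/  ? "match" : "no match" ), "\n";   # match
print "with \\z : ", ( $tricky =~ /[.]log\z/ ? "match" : "no match" ), "\n";   # no match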

Unable to get absolute path of a file from $File::Find::name - perl

I am unable to get the absolute path of a file from $File::Find::name. It is showing an undef value as output, and I am not able to figure out why :( Can anyone please help me out with this?
The error displayed is: Use of uninitialized value $file_name in concatenation
My Code :
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path);
use File::Copy;
use Cwd;
use Data::Printer;

my $rootPATH = $ARGV[0];
my $id       = $ARGV[1];
my @Arraypath;
my $file_name;
our $anr_name;

opendir( my $DIR, $rootPATH );
while ( my $entry = readdir $DIR ) {
    next unless -d $rootPATH . '/' . $entry;
    next if $entry eq '.' or $entry eq '..';
    #print "Found directory $entry\n";
    push( @Arraypath, ( split( "\n", $entry ) ) );
}
closedir $DIR;

my $count = 0;
foreach my $line (@Arraypath) {
    my $fulllogpath = $rootPATH . "\\" . $line;
    #print "$fulllogpath\n";
    $count++;
    start($fulllogpath);
}

sub start {
    my $fulllogpath = shift;
    our @content;
    #print "$fulllogpath\n\n";
    find( \&wanted, $fulllogpath );

    sub wanted {
        push @content, $_;
        return;
    }

    foreach my $file (@content) {
        # print "$file\n\n";
        if ( $file =~ /traces[_d]*/ ) {
            print "$file\n\n";
            $file_name = $File::Find::name;
            p $file_name;
            print "$file_name\n";
        }
    }
}
Your program is very poorly laid out. It will be much simpler to write and debug code if you indent it properly and use carefully-chosen identifiers: a name like start for a subroutine is useless.
You also have unnecessary subroutine declarations which break up the program flow and make it awkward to follow.
Why do you have a couple of package variables (declared with our)? There is generally no need for them, and it is best to use lexical variables throughout, declared at an appropriate place so that all code has access to them if it needs it.
It is also preferable to use File::Spec to work with file paths, rather than manipulate them using string operators, with which it is easy to make a mistake.
The best way to manage the results of find is to work with absolute paths all the way through. It looks like you want to do more than just print the results returned by find since you load modules like Cwd and File::Copy, but without knowing what that further purpose is I cannot help you to write it.
This code removes all the subroutines and makes everything much more concise.
use strict;
use warnings;
use autodie;

use File::Find 'find';
use File::Spec;
use Data::Printer;

my ($root_path, $id) = @ARGV;

opendir my ($dh), $root_path;
my @dir_list =
    grep -d,
    map File::Spec->catfile($root_path, $_),
    grep { not /\A\.\.?\z/ } readdir $dh;
closedir $dh;

my $count;

for my $dir (@dir_list) {
    ++$count;
    find(sub {
        return unless /traces[_d]*/;
        my $file = $_;
        print "$file\n\n";
        my $file_name = $File::Find::name;
        p $file_name;
        print "$file_name\n";
    }, $dir);
}
As has already been stated, $File::Find::name is valid only within the wanted function. Not outside of it.
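For reference, here is a minimal sketch of that idea (the starting directory is hypothetical, and this is not code from the original answers): capture $File::Find::name while still inside wanted, and use the saved paths afterwards:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my @matches;
find(
    sub {
        # $File::Find::name holds the full path only while wanted runs
        push @matches, $File::Find::name if /traces[_d]*/;
    },
    '/some/log/root',    # hypothetical starting directory
);
print "$_\n" for @matches;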
However, I would recommend making the shift to using Path::Class and Path::Class::Rule for some simpler processing of your files in a cross platform compatible way:
use strict;
use warnings;

use Data::Printer;
use Path::Class;
use Path::Class::Rule;

my ( $root_path, $id ) = @ARGV;

my $dir = dir($root_path);

my $next = Path::Class::Rule->new->file->name(qr{traces[_d]*})->iter(
    grep { $_->is_dir() } $dir->children
);

while ( my $file = $next->() ) {
    # Accomplishes the same as your script. I suspect these print
    # statements are mostly for debugging though.
    print $file->basename(), "\n\n";
    p "$file";
    print "$file\n";
}

Odd file handling in perl on OS X

I'm very much a perl newbie, so bear with me.
I was looking for a way to recurse through folders in OS X and came across this solution: How to traverse all the files in a directory...
I modified perreal's answer (see code below) slightly so that I could specify the search folder in an argument; i.e. I changed my @dirs = ("."); to my @dirs = ($ARGV[0]);
But for some reason this wouldn't work -- it would open the folder, but would not identify any of the subdirectories as folders, apart from '.' and '..', so it never actually went beyond the specified root.
If I actively specify the folder (e.g. \Volumes\foo\bar) it still doesn't work. But if I go back to my @dirs = ("."); and then sit in my desired folder (foo\bar) and call the script from its own folder (foo\boo\script.pl), it works fine.
Is this 'expected' behaviour? What am I missing?!
Many thanks,
Mat
use warnings;
use strict;

my @dirs = (".");
my %seen;
while (my $pwd = shift @dirs) {
    opendir(DIR, "$pwd") or die "Cannot open $pwd\n";
    my @files = readdir(DIR);
    closedir(DIR);
    foreach my $file (@files) {
        if (-d $file and ($file !~ /^\.\.?$/) and !$seen{$file}) {
            $seen{$file} = 1;
            push @dirs, "$pwd/$file";
        }
        next if ($file !~ /\.txt$/i);
        my $mtime = (stat("$pwd/$file"))[9];
        print "$pwd $file $mtime";
        print "\n";
    }
}
The problem is that you are using the -d operator on the file basename without its path. Perl will look in the current working directory for a directory of that name and return true if it finds one there, when it should be looking in $pwd.
This solution changes $file to always hold the full name of the file or directory, including the path.
use strict;
use warnings;

my @dirs = (shift);
my %seen;

while (my $pwd = shift @dirs) {

    opendir DIR, $pwd or die "Cannot open $pwd\n";
    my @files = readdir DIR;
    closedir DIR;

    foreach (@files) {
        next if /^\.\.?$/;
        my $file = "$pwd/$_";
        next if $seen{$file};

        if ( -d $file ) {
            $seen{$file} = 1;
            push @dirs, $file;
        }
        elsif ( $file =~ /\.txt$/i ) {
            my $mtime = (stat $file)[9];
            print "$file $mtime\n";
        }
    }
}
Use the full path with -d:
-d "$pwd/$file"

How can I check the extension of a file using Perl?

To my perl script, a file is passed as an argument. The file can be a .txt file or a .zip file containing the .txt file.
I want to write code that looks something like this
if ($file is a zip) {
    unzip $file
    $file =~ s/zip$/txt/;
}
One way to check the extension is to do a split on . and then match the last result in the array (returned by split).
Is there some better way?
You can use File::Basename for this.
#!/usr/bin/perl

use 5.010;
use strict;
use warnings;

use File::Basename;

my @exts = qw(.txt .zip);

while (my $file = <DATA>) {
    chomp $file;
    my ($name, $dir, $ext) = fileparse($file, @exts);

    given ($ext) {
        when ('.txt') {
            say "$file is a text file";
        }
        when ('.zip') {
            say "$file is a zip file";
        }
        default {
            say "$file is an unknown file type";
        }
    }
}

__DATA__
file.txt
file.zip
file.pl
Running this gives:
$ ./files
file.txt is a text file
file.zip is a zip file
file.pl is an unknown file type
Another solution is to make use of File::Type which determines the type of binary file.
use strict;
use warnings;

use File::Type;

my $file      = '/path/to/file.ext';
my $ft        = File::Type->new();
my $file_type = $ft->mime_type($file);

if ( $file_type eq 'application/octet-stream' ) {
    # possibly a text file
}
elsif ( $file_type eq 'application/zip' ) {
    # file is a zip archive
}
This way, you do not have to deal with missing/wrong extensions.
How about checking the end of the filename?
if ($file =~ /\.zip$/i) {
and then:
use strict;
use Archive::Extract;

if ($file =~ /\.zip$/i) {
    my $ae = Archive::Extract->new(archive => $file);
    my $ok = $ae->extract();
    my $files = $ae->files();
}
more information here.
You can check the file extension using a regex match as:
if ($file =~ /\.zip$/i) {
    # $file is a zip file
}
I know this question is several years old, but for anyone that comes here in the future, an easy way to break apart a file path into its constituent path, filename, basename and extension is as follows.
use File::Basename;
my $filepath = '/foo/bar.txt';
my ($basename, $parentdir, $extension) = fileparse($filepath, qr/\.[^.]*$/);
my $filename = $basename . $extension;
You can test its results with the following.
my @test_paths = (
    '/foo/bar/fish.wibble',
    '/foo/bar/fish.',
    '/foo/bar/fish.asdf.d',
    '/foo/bar/fish.wibble.',
    '/fish.wibble',
    'fish.wibble',
);

foreach my $this_path (@test_paths) {
    print "Current path: $this_path\n";
    my ($this_basename, $parentdir, $extension) = fileparse($this_path, qr/\.[^.]*$/);
    my $this_filename = $this_basename . $extension;
    foreach my $var (qw/$parentdir $this_filename $this_basename $extension/) {
        print "$var = '" . eval($var) . "'\n";
    }
    print "\n\n";
}
Hope this helps.
Why rely on file extension? Just try to unzip and use appropriate exception handling:
eval {
    # try to unzip the file
};
if ($@) {
    # not a zip file
}
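One hedged way to fill in that skeleton, reusing the Archive::Extract module shown in an earlier answer (the file name here is only a placeholder):
#!/usr/bin/perl
use strict;
use warnings;
use Archive::Extract;

my $file = 'input.dat';    # hypothetical input file

my $ok = eval {
    my $ae = Archive::Extract->new( archive => $file, type => 'zip' );
    $ae->extract or die $ae->error;
    1;
};
unless ($ok) {
    # not a usable zip archive; treat it as a plain .txt file instead
    warn "Not a zip archive: $@";
}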
Maybe a little bit late but it could be used as an alternative reference:
sub unzip_all {
    my $director = shift;
    opendir my $DIRH, "$director" or die;
    my @files = readdir $DIRH;
    foreach my $file (@files) {
        my $type = `file $director/$file`;
        if ($type =~ m/gzip compressed data/) {
            system "gunzip $director/$file";
        }
    }
    closedir $DIRH;
    return;
}
Here the Linux file utility is executed from Perl using backticks (``). You pass the path of your folder and check whether it contains any file that file classifies as gzip compressed data.
If you do not mind using a perl module, you can use Module::Generic::File, such as:
use Module::Generic::File qw( file );
my $f = file( '/some/where/file.zip' );
if( $f->extension eq 'zip' )
{
    # do something
}
Module::Generic::File has a lot of features to handle and manipulate a file.