Perl script writing output twice; second time in reverse order

I'm trying to figure out why a script is writing its output twice: the first time in the proper order, the second time in reverse order. It should only be writing it once.
#!/usr/bin/perl
use warnings;
use strict;
use Fcntl ':mode';
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';
my $dir = "/var/log/tivoli/";
my $mtab = "/etc/mtab";
my $permFile = "world_writable_files.txt";
my $tmpFile = "world_writable_files.tmp";
my $exclude = "/usr/local/etc/world_writable_excludes.txt";
my $mask = S_IWUSR | S_IWGRP | S_IWOTH;
my (%excludes, %devNums);
my $errHeader;
# Compile a list of mountpoints that need to be scanned
my @mounts;
open MT, "<${mtab}" or die "Cannot open ${mtab}, $!";
# We only want the local mountpoints
while (<MT>) {
if ($_ =~ /ext[34]/) {
chomp;
my @line = split;
push(@mounts, $line[1]);
my @stats = stat($line[1]);
$devNums{$stats[0]} = undef;
}
}
close MT;
# Build a hash from /usr/local/etc/world_writable_excludes.txt
if ((! -e $exclude) || (-z $exclude)) {
$errHeader = <<HEADER;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!                                                  !!
!! /usr/local/etc/world_writable_excludes.txt is    !!
!! missing or empty. This report includes every     !!
!! world-writable file, including those which are   !!
!! expected and should be excluded.                 !!
!!                                                  !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
HEADER
} else {
open XCLD, "<${exclude}" or die "Cannot open ${exclude}, $!\n";
while (<XCLD>) {
chomp;
$excludes{$_} = 1;
}
}
sub wanted {
my @dirStats = stat($File::Find::name);
# Is it excluded from the report...
return if exists $excludes{$File::Find::name};
# ...is the Tivoli installation directory or a special directory...
if ($File::Find::name =~ m{^/sys|^/proc|^/dev|^/opt/IBM/ITM}) {
$File::Find::prune = 1;
return;
}
# ...a regular file, ...
return unless -f;
# ...local, ...
return unless (exists $devNums{$dirStats[0]});
# ...and world writable?
return unless ($dirStats[2] & $mask) == $mask;
# If so, add the file to the list of world writable files
print(WWFILE "$File::Find::name\n");
}
# Create the output file path if it doesn't already exist.
mkdir($dir) or die "Cannot execute mkdir on ${dir}, $!" unless (-d $dir);
# Create our filehandle for writing our findings
open WWFILE, ">${dir}${tmpFile}" or die "Cannot open ${dir}${tmpFile}, $!";
print(WWFILE "${errHeader}") if ($errHeader);
find(\&wanted, @mounts);
close WWFILE;
# If no world-writable files have been found ${tmpFile} should be zero-size;
# Delete it so Tivoli won't alert
if (-z "${dir}${tmpFile}") {
unlink "${dir}${tmpFile}";
} else {
rename("${dir}${tmpFile}","${dir}${permFile}") or die "Cannot rename file ${dir}${tmpFile}, $!";
}
Example output:
# cat world_writable_files.txt
/var/opt/ds_agent/am/diagnostic_1.log
/home/User1/report.pl.20130220
/home/User1/report.pl.20130220
/var/opt/ds_agent/am/diagnostic_1.log
Each file is printed only once by the script, so I am wondering if the filesystem is being scanned twice, once in each direction. I don't see where that would be happening, but I don't know.
Excludes file:
# cat /usr/local/etc/world_writable_excludes.txt
/var/opt/ds_agent/diagnostic.log
/var/opt/ds_agent/am/diagnostic.log
Any thoughts on this conundrum?

It's because @mounts has / in addition to /home and /var. So you're asking it to scan everything at and below / (including /home and /var), then to scan everything at and below /home, then to scan everything at and below /var.
It's better to identify the places you want to avoid.
while (<MT>) {
my @fields = split;
if ($fields[2] !~ /^ext[34]\z/) {
++$excludes{ $fields[1] };
}
}
find(\&wanted, '/');
sub wanted {
if ($excludes{$File::Find::name}) {
$File::Find::prune = 1;
return;
}
my @dirStats = stat($File::Find::name);
return if !-f;
return unless $dirStats[2] & S_IWOTH;
print(WWFILE "$File::Find::name\n");
}
You won't need if ($File::Find::name =~ m{^/sys|^/proc|^/dev|^/opt/IBM/ITM}) anymore, because /sys, /proc and /dev aren't ext3 or ext4 filesystems. Except maybe for /opt/IBM/ITM (since I don't know what that is). If you did have some files or directories you wanted to skip, add them to %excludes rather than making a relatively expensive regex match.
++$excludes{$_} for qw( /foo /bar /opt/IBM/ITM );
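Another option, if you want to keep a single starting point for the scan, is to prune as soon as find() crosses onto a device that isn't one of the local ext3/ext4 filesystems. This is only a rough, untested sketch; it reuses the %devNums, %excludes, $mask and WWFILE that the original script already sets up:
sub wanted {
    my @dirStats = stat($File::Find::name);
    return unless @dirStats;    # stat can fail (e.g. on a broken symlink)

    # Stop descending as soon as we cross onto a filesystem we don't care about
    if (-d _ && !exists $devNums{$dirStats[0]}) {
        $File::Find::prune = 1;
        return;
    }

    return if exists $excludes{$File::Find::name};
    return unless -f _;
    return unless ($dirStats[2] & $mask) == $mask;
    print WWFILE "$File::Find::name\n";
}

find(\&wanted, '/');
The device-number check does the same job as listing every mount point, but a single find('/') call never visits a file more than once.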

Related

How can I parse files without giving a file name in Perl?

When I run the program I only give a directory name, and I want all the files in that directory to be parsed. Here is my code:
my @indexFiles = "www/I.html";
my @rdata = readFile("@indexFiles");
sub readFile{
my $somefile = $_[0];
my @links = ($somefile);
my $p = HTML::TokeParser->new($somefile) || die "Can't open: $!";
while (my $token = $p->get_tag("img","a")){
my $currentlink = $token->[1]{href} || $token->[1]{src};
my $finalLink= $directory."/".$currentlink ;
if($currentlink =~ /\.html$/){
my @data = readFile($finalLink);
push @links, @data;
} else{
push @links, $finalLink;
}
}
return @links;
}
In the www folder I have 3 HTML files and 2 folders.
In the line my @indexFiles = "www/I.html" I pass a specific path and file name. I don't want to pass that name; instead, it should be picked up automatically.
For example: when I run my program as perl c.pl www, it should parse all the files.
At the moment I give the specific file name I.html, and it then goes on to find the img and a tags.
Your question doesn't appear to have anything to do with parsing or HTML::TokeParser; it appears to be about determining whether a path references a directory or not, and getting the list of files in the directory if it references a directory.
stat and -d (in conjunction or independently) can be used to test if a path references a directory.
At the lowest level, opendir+readdir+closedir is used to read a directory. The glob builtin and numerous modules provide alternative ways of doing this.
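For example, here is a minimal sketch of reading a single directory with those low-level calls (the directory name is just an illustration):
use strict;
use warnings;

my $dir = 'www';    # example directory

opendir(my $dh, $dir) or die "Can't open directory '$dir': $!";
while (my $entry = readdir($dh)) {
    next if $entry eq '.' || $entry eq '..';    # skip the special entries
    my $path = "$dir/$entry";
    if (-d $path) {
        print "directory: $path\n";
    }
    elsif (-f $path && $path =~ /\.html\z/) {
        print "HTML file: $path\n";
    }
}
closedir($dh);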
Recursive search:
sub process {
for my $qfn (@_) {
stat($qfn)
or die("Can't stat \"$qfn\": $!\n");
if (-d _) {
process(glob("\Q$qfn\E/*"));
}
elsif ($qfn =~ /\.html\z/) {
process_html_file($qfn);
}
}
}
process('www');
Non-recursive search:
sub process {
for my $qfn (@_) {
stat($qfn)
or die("Can't stat \"$qfn\": $!\n");
if (-d _) {
process_html_file(glob("\Q$qfn\E/*.html"));
}
elsif ($qfn =~ /\.html\z/) {
process_html_file($qfn);
}
}
}
process('www');
Alternatively, you could use File::Find::Rule.
Recursive search:
use File::Find::Rule qw( );
process_html_file($_)
for File::Find::Rule->name('*.html')->file->in('www');
Non-recursive search:
use File::Find::Rule qw( );
process_html_file($_)
for File::Find::Rule->maxdepth(1)->name('*.html')->file->in('www');
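If you would rather not install File::Find::Rule, a roughly equivalent recursive version using the core File::Find module could look like this (a sketch; process_html_file is assumed to be your own subroutine, as above):
use strict;
use warnings;
use File::Find qw(find);

find(sub {
    # $_ is the file name in the current directory; $File::Find::name is the full path
    process_html_file($File::Find::name) if -f && /\.html\z/;
}, 'www');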

Ignore an entire directory when using File::Find in Perl script

I have a script which scans every local filesystem for world-writable files. Any found files are written to an output file. It also uses another file which provides a list of files to ignore.
We have the Tivoli monitoring agent installed which, for some strange reason, has been designed to create every file in its installation path with world-writable permissions. As it is known and there is little we can do about it, we would like to simply ignore the entire directory.
I imagine I can utilize a glob such as /opt/IBM/ITM/* but I haven't the first bit of a clue at to how to do that.
At the moment I've hard-coded the directory into the script. This is less than ideal, but functional. I'd prefer to have it in the list of excludes.
Over at Code Review it was suggested that I use File::Find::prune. Unfortunately, this hasn't worked. From what I gather and understand about File::Find::prune, if it finds a file at /opt/IBM/ITM/.../.../file.txt which is supposed to be excluded, it will then skip the entire /opt/IBM/ITM/.../.../ directory. This is fine, but it means I would need to have an exclusion entry for every sub-directory of /opt/IBM/ITM/. This would be a tedious endeavor considering how many sub-directories and sub-sub-directories there are.
I did try placing a world-writable file under /opt/IBM/ITM/ and add that to the exclusion list, but it didn't work. I'm guessing because it wasn't found first.
The script:
#!/usr/bin/perl
use warnings;
use strict;
use Fcntl ':mode';
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';
my $dir = "/var/log/tivoli/";
my $mtab = "/etc/mtab";
my $permFile = "world_writable_files.txt";
my $tmpFile = "world_writable_files.tmp";
my $exclude = "/usr/local/etc/world_writable_excludes.txt";
my $mask = S_IWUSR | S_IWGRP | S_IWOTH;
my (%excludes, %devNums);
my $errHeader;
# Compile a list of mountpoints that need to be scanned
my @mounts;
open MT, "<${mtab}" or die "Cannot open ${mtab}, $!";
# We only want the local mountpoints
while (<MT>) {
if ($_ =~ /ext[34]/) {
chomp;
my @line = split;
push(@mounts, $line[1]);
my @stats = stat($line[1]);
$devNums{$stats[0]} = undef;
}
}
close MT;
# Build a hash from /usr/local/etc/world_writable_excludes.txt
if ((! -e $exclude) || (-z $exclude)) {
$errHeader = <<HEADER;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!                                                  !!
!! /usr/local/etc/world_writable_excludes.txt is    !!
!! missing or empty. This report includes every     !!
!! world-writable file, including those which are   !!
!! expected and should be excluded.                 !!
!!                                                  !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
HEADER
} else {
open XCLD, "<${exclude}" or die "Cannot open ${exclude}, $!\n";
while (<XCLD>) {
chomp;
$excludes{$_} = 1;
}
}
sub wanted {
my @dirStats = stat($File::Find::name);
# Is it excluded from the report...
return if exists $excludes{$File::Find::name};
# ...is the Tivoli installation directory...
return if ($File::Find::name =~ /\b\/ITM\b/);
# ...in a special directory, ...
return if ($File::Find::name =~ /^\bsys\b|\bproc\b|\bdev\b$/);
# ...a regular file, ...
return unless -f;
# ...local, ...
return unless (exists $devNums{$dirStats[0]});
# ...and world writable?
return unless ($dirStats[2] & $mask) == $mask;
# If so, add the file to the list of world writable files
print(WWFILE "$File::Find::name\n");
}
# Create the output file path if it doesn't already exist.
mkdir($dir) or die "Cannot execute mkdir on ${dir}, $!" unless (-d $dir);
# Create our filehandle for writing our findings
open WWFILE, ">${dir}${tmpFile}" or die "Cannot open ${dir}${tmpFile}, $!";
print(WWFILE "${errHeader}") if ($errHeader);
finddepth(\&wanted, @mounts);
close WWFILE;
# If no world-writable files have been found ${tmpFile} should be zero-size;
# Delete it so Tivoli won't alert
if (-z "${dir}${tmpFile}") {
unlink "${dir}${tmpFile}";
} else {
rename("${dir}${tmpFile}","${dir}${permFile}") or die "Cannot rename file ${dir}${tmpFile}, $!";
}
It has also been suggested elsewhere that I use File::Find::Rule. I'd rather avoid doing this simply because I don't want to perform a complete rewrite of the script.
As I've said, the script above works. I'd prefer not hard-coding the exclusion, though. Figuring out how to do this would also allow me to remove the match against the "special" directories.
To prune an entire directory tree, just set the $File::Find::prune value in your wanted sub. This will work as long as bydepth was not specified:
if ($File::Find::name eq '/opt/IBM/ITM') {
$File::Find::prune = 1;
return;
}
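Note that the script in the question calls finddepth(), which implies bydepth, so prune would be ignored there; switching to find() is part of the fix. You could also drive the pruning from the same excludes file rather than hard-coding the path. A rough, untested sketch that reuses the %excludes, %devNums, $mask, @mounts and WWFILE the script already sets up:
sub wanted {
    # An entry in world_writable_excludes.txt may now name a directory;
    # a matching directory prunes its whole subtree.
    if (exists $excludes{$File::Find::name}) {
        $File::Find::prune = 1 if -d;
        return;
    }

    my @dirStats = stat($File::Find::name);
    return unless -f _;
    return unless exists $devNums{$dirStats[0]};
    return unless ($dirStats[2] & $mask) == $mask;
    print WWFILE "$File::Find::name\n";
}

# prune is honoured by find(), not by finddepth()
find(\&wanted, @mounts);
With that in place, adding the line /opt/IBM/ITM to /usr/local/etc/world_writable_excludes.txt skips the whole Tivoli tree, and the hard-coded regex matches can be dropped.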

Perl sftp downloads with Net::SFTP::Foreign

I'm a beginner. I have written a Perl script which does the following:
-Create a directory under “/x01/abcd/abc_logs/abcd_Logs” named for the current date, in the format “YYYYMMDD”, if it has not already been created.
i.e. if the script is run on the 1st of Jan 2013, the directory “20130101” will be created under the said path. So whenever there is a need to inspect the logs, always look for a directory named for the current date.
-Check if the log file(s) have already been downloaded earlier within the same day, and if not, the log(s) will be downloaded to TODAY’s directory.
I'm having a hard time coming up with a solution to print a message when there are no files in the share. This is of course when the user specifies 2 or more files that are not there in the share. I know that this happens because there is a "die" statement in "sub get_LOGS". I just cannot seem to work out how to return a message when none of the files I specify happen to be in the share.
Usage of this script is as follows:
./abc_logs ....<file(n)>
following is the script.
my $LOGS_LOCAL_PATH = "/x02/abc/abcba2/";
chomp $LOGS_LOCAL_PATH;
my $LOGS_REM_PATH = "/x01/INT/abc/vabc2/";
chomp $LOGS_REM_PATH;
my $TODAY = `date +%Y%m%d`;
chomp $TODAY;
my @GETLOOP = @ARGV;
unless ($#ARGV >= 0) {
print "\nUsage: gtp_logs.pl <file1> <file2> <file3>.....<file(n)>\n\n";
exit;
}
system("clear");
unless ( -d "$LOGS_LOCAL_PATH"."$TODAY") {
print "Directory \"$TODAY\" doesn't exist. So creating the directory..!\n";
print "OK..Done.....!\n\n";
system("mkdir $LOGS_LOCAL_PATH/$TODAY");
}
else {
print "Directory already exists. Logs will be downloaded to ==> \"$LOGS_LOCAL_PATH$TODAY\".....!\n\n";
}
# if_DOWNLOADED($LOGS_LOCAL_PATH,$TODAY,@GETLOOP);
chdir("$LOGS_LOCAL_PATH"."$TODAY") || die "cannot cd to ($!)";
foreach my $GETL (@GETLOOP) {
my $is_downloaded = if_DOWNLOADED($LOGS_LOCAL_PATH,$TODAY,$GETL);
if(!$is_downloaded)
{
get_LOGS("172.25.70.221","abc","abc2","/x01/INT/abc",$GETL);
print "File \"$GETL\" downloaded to ==> \"$LOGS_LOCAL_PATH$TODAY\"\n\n";
}
else
{
print "File \"$GETL\" has already been Downloaded to ==> \"$LOGS_LOCAL_PATH$TODAY\"\n\n";
}
}
sub get_LOGS {
my $LOG_HOST = shift;
my $REM_USER = shift;
my $REM_PASSW = shift;
my $REM_PATH = shift;
my $REM_FILE = shift;
print "Connecting to the sftp share! Please wait....!\n";
my $sftp = Net::SFTP::Foreign->new($LOG_HOST, user => $REM_USER, password => $REM_PASSW);
$sftp->setcwd($REM_PATH) or die "unable to change cwd: " . $sftp->error;
print "OK. On the share! Downloading the file \"$REM_FILE\"...................!\n\n\n\n";
$sftp->error and die "Problem connecting to the share...!!!! " . $sftp->error;
$sftp->get($REM_FILE) or die "File does not seem to be present on the remote share. Please re-request..!!!" . $sftp->error;
return $REM_FILE;
}
sub if_DOWNLOADED {
my $DWD_FILE_PATH = shift;
my $DWD_DIR = shift;
my $DWD_FILE = shift;
if (-e "$DWD_FILE_PATH/$DWD_DIR/$DWD_FILE")
{
return 1;
}
else
{
return 0;
}
}
Can someone please help me find a solution to this? Please try to use the same script and modify it.
/V
Some comments on your code:
Use strict and warnings in order to catch lots of errors early.
Read some book on style (e.g. Damian Conway's Perl Best Practices). In any case, try to be consistent when naming variables, subroutines, and everything else, and also with their case.
When you have to use some calculated value in several places, try to calculate it once and save it in a variable.
Don't use subroutines for trivial things.
You don't need to call chomp on variables you have defined and that don't have a "\n" character at the end.
Opening a new SFTP connection for every file transfer is very inefficient. You can open just one at the beginning and use it for all the transfers.
And now, a simplified version of your script:
#!/usr/bin/perl
use strict;
use warnings;
use Net::SFTP::Foreign;
my $host = "172.25.70.221";
my $user = "abc";
my $password = "abc1234321";
my $LOGS_LOCAL_PATH = "/x02/ABC/abc2";
my $LOGS_REM_PATH = "/x01/INT/abc/vim";
my $TODAY = `date +%Y%m%d`;
chomp $TODAY;
my $TODAY_LOCAL_PATH = "$LOGS_LOCAL_PATH/$TODAY";
my @files = @ARGV;
@files or die "\nUsage: gtp_logs.pl <file1> <file2> <file3>.....<file(n)>\n\n";
system("clear");
if ( -d $TODAY_LOCAL_PATH) {
print "Directory already exists. Logs will be downloaded to ==> \"$TODAY_LOCAL_PATH\".....!\n\n";
}
else {
print "Directory \"$TODAY\" doesn't exist. So creating the directory..!\n";
mkdir "$TODAY_LOCAL_PATH" or die "unable to create directory: $!\n";
print "OK..Done.....!\n\n";
}
chdir $TODAY_LOCAL_PATH or die "cannot cd to ($!)\n";
my $sftp = Net::SFTP::Foreign->new($host, user => $user, password => $password);
$sftp->error
and die "Problem connecting to the share...!!!! " . $sftp->error;
my $ok = 0;
my $failed = 0;
foreach my $file (@files) {
if (-e "$TODAY_LOCAL_PATH/$file") {
print "File \"$file\" has already been Downloaded to ==> \"$TODAY_LOCAL_PATH\"\n";
}
else {
if ($sftp->get("$LOGS_REM_PATH/$file")) {
print "File \"$file\" downloaded to ==> \"$TODAY_LOCAL_PATH\"\n";
$ok++;
}
else {
print "Unable to download file \"$file\" : " . $sftp->error . "\n";
$failed++;
}
}
}
print "$ok files have been downloaded, $failed files failed!\n\n";

What's the best strategy to delete a very huge folder using Perl?

I need to delete all content (files and folders) under a given folder. The problem is the folder has millions of files and folders inside it. So I don't want to load all the file names in one go.
Logic should be like this:
iterate over a folder without loading everything
get a file or folder
delete it
(verbose that the file or folder "X" was deleted)
go to the next one
I'm trying something like this:
sub main(){
my ($rc, $help, $debug, $root) = ();
$rc = GetOptions ( "HELP" => \$help,
"DEBUG" => \$debug,
"ROOT=s" => \$root);
die "Bad command line options\n$usage\n" unless ($rc);
if ($help) { print $usage; exit (0); }
if ($debug) {
warn "\nProceeding to execution with following parameters: \n";
warn "===============================================================\n";
warn "ROOT = $root\n";
} # write debug information to STDERR
print "\n Starting to delete...\n";
die "usage: $0 dir ..\n" unless $root;
*name = *File::Find::name;
find \&verbose, @ARGV;
}
sub verbose {
if (!-l && -d _) {
print "rmdir $name\n";
} else {
print "unlink $name\n";
}
}
main();
It's working fine, but whenever "find" reads the huge folder, the application gets stuck and I can see the system memory for Perl increasing until timeout. Why? Is it trying to load all the files in one go?
Thanks for your help.
The remove_tree function from File::Path can portably and verbosely remove a directory hierarchy, keeping the top directory, if desired.
use strict;
use warnings;
use File::Path qw(remove_tree);
my $dir = '/tmp/dir';
remove_tree($dir, {verbose => 1, keep_root => 1});
Pre-5.10, use the rmtree function from File::Path. If you still want the top directory, you could just mkdir it again.
use File::Path;
my $dir = '/tmp/dir';
rmtree($dir, 1); # 1 means verbose
mkdir $dir;
The perlfaq points out that File::Find does the hard work of traversing a directory, but the work isn't that hard (assuming your directory tree is free of named pipes, block devices, etc.):
sub traverse_directory {
my $dir = shift;
opendir my $dh, $dir;
while (my $file = readdir($dh)) {
next if $file eq "." || $file eq "..";
if (-d "$dir/$file") {
&traverse_directory("$dir/$file");
} elsif (-f "$dir/$file") {
# $dir/$file is a regular file
# Do something with it, for example:
print "Removing $dir/$file\n";
unlink "$dir/$file" or warn "unlink $dir/$file failed: $!\n";
} else {
warn "$dir/$file is not a directory or regular file. Ignoring ...\n";
}
}
closedir $dh;
# $dir might be empty at this point. If you want to delete it:
if (rmdir $dir) {
print "Removed $dir/\n";
} else {
warn "rmdir $dir failed: $!\n";
}
}
Substitute your own code for doing something with a file or (possibly) empty directory, and call this function once on the root of the tree that you want to process. Look up the meanings of opendir/closedir, readdir, -d, and -f if you haven't encountered them before.
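For example, a small assumed wrapper to drive it from the command line:
# Hypothetical wrapper: take the root directory to clean from the command line
my ($root) = @ARGV;
die "usage: $0 <directory>\n" unless defined $root && -d $root;
traverse_directory($root);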
What's wrong with:
`rm -rf $folder`; // ??
You can use File::Find to systematically traverse the directory and delete the files and directories under it.
OK, I gave in and used Perl builtins but you should use File::Path::rmtree which I had totally forgotten about:
#!/usr/bin/perl
use strict; use warnings;
use Cwd;
use File::Find;
my ($clean) = @ARGV;
die "specify directory to clean\n" unless defined $clean;
my $current_dir = getcwd;
chdir $clean
or die "Cannot chdir to '$clean': $!\n";
finddepth(\&wanted => '.');
chdir $current_dir
or die "Cannot chdir back to '$current_dir':$!\n";
sub wanted {
return if /^[.][.]?\z/;
warn "$File::Find::name\n";
if ( -f ) {
unlink or die "Cannot delete '$File::Find::name': $!\n";
}
elsif ( -d _ ) {
rmdir or die "Cannot remove directory '$File::Find::name': $!\n";
}
return;
}
Download the unix tools for windows and then you can do rm -rv or whatever.
Perl is a great tool for a lot of purposes, but this one seems better done by a specialised tool.
Here's a cheap "cross-platform" method:
use Carp qw<carp croak>;
use English qw<$OS_NAME>;
use File::Spec;
my %deltree_op = ( nix => 'rm -rf %s', win => 'rmdir /S %s' );
my %group_for
= ( ( map { $_ => 'nix' } qw<linux UNIX SunOS> )
, ( map { $_ => 'win' } qw<MSWin32 WinNT> )
);
my $group_name = $group_for{$OS_NAME};
sub chop_tree {
my $full_path = shift;
carp( "No directory $full_path exists! We're done." ) unless -e $full_path;
croak( "No implementation for $OS_NAME!" ) unless $group_name;
my $format = $deltree_op{$group_name};
croak( "Could not find command format for group $group_name" ) unless $format;
my $command = sprintf( $format, File::Spec->canonpath( $full_path ));
qx{$command};
}
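Note that qx{} doesn't raise an error when the external command fails. If you want the sketch above to report that, you could check $? after running the command, for instance by replacing the bare qx{$command} with something like this (an assumed addition, not part of the original):
# Assumed addition inside chop_tree(): fail loudly if the delete command fails
my $output = qx{$command};
croak( "'$command' exited with status " . ($? >> 8) ) if $? != 0;
return $output;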

What's the best way to tell if a file exists in a directory?

I'm trying to move a file but I want to ensure that it exists before I do so. What's the simplest way to do this in Perl?
My code is like this. I looked up the open command, but I am not sure whether it is the simplest way.
if #Parser.exe exist in directory of Debug
{
move ("bin/Debug/Parser.exe","Parser.exe");
}
elsif #Parser.exe exist in directory of Release
{
move ("bin/Release/Parser.exe","Parser.exe");
}
else
{
die "Can't find the Parser.exe.";
}
Thank you.
What you need is a file test operator: specifically, the -e operator, which checks whether a file exists.
if (-e "bin/Debug/Parser.exe")
{
move ("bin/Debug/Parser.exe","Parser.exe");
}
elsif (-e "bin/Release/Parser.exe")
move ("bin/Release/Parser.exe","Parser.exe");
else
{
die "Can't find the Parser.exe."
}
You can make use of the -e file test to check for file existence:
use File::Copy;
if(-e "bin/Debug/parser.exe") {
copy("bin/Debug/parser.exe","Parser.exe") or die "Copy failed: $!";
} elsif(-e "bin/Release/Parser.exe") {
copy("bin/Release/parser.exe","Parser.exe") or die "Copy failed: $!";
} else {
die "Can't find the Parser.exe.";
}
Personally I don't like the duplication of the file/path name in these solutions - speaking for myself, I suspect I might accidentally change it to
if(-e "pathone....")... { copy("pathtwo...","Parser.exe")
I would do something like
copy("bin/Debug/parser.exe","Parser.exe") or
copy("bin/Release/parser.exe","Parser.exe") or
die "Can't find the Parser.exe.";
Or if that is a bit risky
copy_parser("bin/Debug") or
copy_parser("bin/Release") or
die "Can't find the Parser.exe.";
sub copy_parser {
my $path = shift ;
my $source = File::Spec->catfile( $path, 'Parser.exe' );
if ( -e $source ) {
copy( $source, "Parser.exe") or die "Copy of $source failed: $!";
return 1 ;
}
return 0 ;
}
justintime is on the right track when he notes the repetition and seeks to eliminate it. I took the minimization a step farther than he did.
Rather than encapsulate only the copy/move portion of the code, though, it makes sense to remove all the repetition by encapsulating the list iteration.
I put the subroutine in a module so it can be reused later as needed. This also reduces repeated code.
use SearchMove;
my $found = search_and_move(
src => 'Parser.exe',
dest => 'Parser.exe',
dirs => [
"bin/Debug",
"bin/Release",
],
);
die "Can't find the Parser.exe\n"
unless defined $found;
print "Found Parser.exe in $found";
In SearchMove.pm
package SearchMove;
use strict;
use warnings;
use Carp qw( croak );
use File::Copy qw( move );
use Exporter 'import';
our @EXPORT_OK = qw( search_and_move );
our @EXPORT = @EXPORT_OK;
sub search_and_move {
my %arg = @_;
croak "No source file" unless exists $arg{src};
croak "No dest file" unless exists $arg{dest};
croak "No search paths" unless exists $arg{dirs};
my $got_file;
for my $dir ( @{$arg{dirs}} ) {
my $source = "$dir/$arg{src}";
if( -e $source ) {
move( $source, $arg{dest} );
$got_file = $dir;
last;
}
}
return $got_file;
}
1;
Now you can use search_and_move in many different projects.