Recursive grep in Perl

I am new to Perl. I have a directory structure with a log file in each directory. I want to grep a pattern from each file and do post-processing. Right now I grep the pattern from those files using Unix grep, put the results into a text file, and read that text file for the post-processing, but I want to automate reading each file and grepping the pattern from it. In the code below, mdp_cgdis_1102.txt holds the patterns grepped from the directories. I would really appreciate any help.
#!/usr/bin/perl
use strict;
use warnings;
open FILE, 'mdp_cgdis_1102.txt' or die "Cannot open file $!";
my @array = <FILE>;
my @arr;
my @brr;
foreach my $i (@array) {
    @arr = split (/\//, $i);
    @brr = split (/\:/, $i);
    print " $arr[0] --- $brr[2]";
}

It is unclear to me which part of the process needs automating. I'll go by "want to automate reading each file and grepping a pattern from that file," which presumes you already have a list of files. If you actually need to build the file list as well, see the added code below.
One way is to pull all patterns from each file and store them in a hash (filename => arrayref-with-patterns):
my %file_pattern;

foreach my $file (@filelist) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    $file_pattern{$file} = [ grep { /$pattern/ } <$fh> ];
    close $fh;
}
The [ ] takes a reference to the list returned by grep, i.e. it constructs an "anonymous array", and that reference is assigned as the value for the $file key.
Now you can process your patterns, per log file
foreach my $filename (sort keys %file_pattern) {
    print "Processing log $filename.\n";
    my @patterns = @{ $file_pattern{$filename} };
    # Process the list of patterns in this log file
}
ADDED
In order to build the list of files @filelist used above from a known list of directories, use the core File::Find module, which recursively scans the supplied directories and applies the supplied subroutines:
use File::Find;
find( { wanted => \&process_logs, preprocess => \&select_logs }, @dir_list );
Your subroutine process_logs() is applied to each file/directory that passed preprocessing by the second sub, with its name available as $File::Find::name. In it you can either populate the hash with patterns-per-log as shown above, or run the complete processing as needed.
Your subroutine select_logs() contains code to filter the log files out of all the files in each directory that File::Find would normally process, so that process_logs() only gets the log files. A sketch of both subroutines follows at the end of this answer.
Another way would be to use the other invocation
find(\&process_all, @dir_list);
where now the sub process_all() is applied to every entry (files and directories) found, so this sub itself needs to ensure that it only processes the log files. See the linked documentation.
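For illustration, here is a minimal sketch of what the two subroutines from the first invocation might look like. The .log suffix, the $pattern, and the directory list are assumptions standing in for your own values; %file_pattern is the hash from above:

use strict;
use warnings;
use File::Find;

my @dir_list = ('logs');    # placeholder list of directories to scan
my $pattern  = qr/ERROR/;   # placeholder pattern to grep for
my %file_pattern;           # filename => arrayref-with-patterns, as above

find( { wanted => \&process_logs, preprocess => \&select_logs }, @dir_list );

# Keep directories (so find can descend into them) and .log files, drop the rest
sub select_logs {
    return grep { -d or /\.log$/ } @_;
}

# Called for each entry that survived preprocessing; grep each log file
sub process_logs {
    return unless -f;       # skip the directories themselves
    open my $fh, '<', $_ or die "Can't open $File::Find::name: $!";
    $file_pattern{$File::Find::name} = [ grep { /$pattern/ } <$fh> ];
    close $fh;
}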

The equivalent of
find ... -name '*.txt' -type f -exec grep ... {} +
is
use File::Find::Rule qw( );
my $base_dir_qfn = ...;
my $re = qr/.../;
my @log_qfns =
    File::Find::Rule
    ->name(qr/\.txt\z/)
    ->file
    ->in($base_dir_qfn);
my $success = 1;
for my $log_qfn (@log_qfns) {
    open(my $fh, '<', $log_qfn)
        or do {
            $success = 0;
            warn("Can't open log file \"$log_qfn\": $!\n");
            next;
        };
    while (<$fh>) {
        print if /$re/;
    }
}
exit(1) if !$success;

Use File::Find to traverse the directory.
In a loop go through all the logfiles:
Open the file
read it line by line
For each line, do a regular expression match, e.g. if ($line =~ /pattern/), or use if (index($line, $searchterm) >= 0) if you are looking for a certain static string.
If you find a match, print the line.
close the file
I hope that gives you enough pointers to get started. You will learn more if you find out how to do each of these steps in Perl by yourself (I pointed out the hard ones).
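If you get stuck, here is a rough sketch of those steps. The start directory, the .log suffix, and the pattern are placeholders to replace with your own:

use strict;
use warnings;
use File::Find;

my $pattern = qr/pattern/;                   # placeholder search pattern
find( sub {
    return unless -f and /\.log$/;           # only plain log files
    open my $fh, '<', $_ or die "Can't open $File::Find::name: $!";
    while ( my $line = <$fh> ) {
        print $line if $line =~ /$pattern/;  # print matching lines
    }
    close $fh;                               # close the file
}, 'logdir' );                               # placeholder top directory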

Why is this Perl foreach loop executing only once?

I am trying to copy the content of three separate .vect files into one. I want to do this for all 5,000 files in the $fromdir directory.
When I run this program it generates just a single modified .vect file in the output directory. If I include the close(DATA) calls after individual while loops inside the foreach loop, I get the same behavior: a single output file in the output directory instead of the wanted 5,000 files.
I have done some reading, and at first thought I may not be opening the files. But if I print($vectfile) in the foreach loop every file name in the directory is printed.
My second thought was that it was how I was closing the files, but I get the same behavior whether I close the file handles inside or outside the foreach loop.
My final thought was maybe I don't have write permission to the file or directory, but I don't know how to change this.
How can I get this loop to run all 5,000 times and not just once?
use strict;
use warnings;
use feature qw(say);
my $dir = "D:\\Downloads";
# And M3.1 and P3.1
my $subfolder = "A0.1";
my $fromdir = $dir . "\\" . $subfolder;
my @files = <$fromdir/*vect>;
# Top of file
my $readfiletop = "C:\\Users\\Owner\\Documents\\MoreKnotVis\\ScriptsForAdditionalDataSets\\VectFileHeader.vect";
# Bottom of file
my $readfilebottom = "C:\\Users\\Owner\\Documents\\MoreKnotVis\\ScriptsForAdditionalDataSets\\VectFileCloser.vect";
foreach my $vectfile ( @files ) {
    say("$vectfile");
    my $count = 0;
    my $readfilebody = $vectfile;
    my $out_file = "D:\\Downloads\\ColorsA0.1\\" . "$count" . ".vect";
    $count++;
    # open top part of each file
    open(DATA1, "<", $readfiletop) or die "Can't open '$readfiletop': $!";
    # open bottom part of each file
    open(DATA3, "<", $readfilebottom) or die "Can't open '$readfilebottom': $!";
    # open a file to read
    open(DATA2, "<", $vectfile) or die "Can't open '$vectfile': $!";
    # open a file to write to
    open(DATA4, ">", $out_file) or die "Can't open '$out_file': $!";
    # Copy data from VectFileTop file to another.
    while ( <DATA1> ) {
        print DATA4 $_;
    }
    # Copy the data from VectFileBody to another.
    while ( <DATA2> ) {
        print DATA4 $_, $_ if 8..12;
    }
    # Copy the data from VectFileBottom to another.
    while ( <DATA3> ) {
        print DATA4 $_;
    }
}
close( DATA1 );
close( DATA2 );
close( DATA3 );
close( DATA4 );
print("quit\n");
You construct the output file name with $count in it, but note what you do with this variable: inside the loop you set it to 0, the output file name is constructed with that 0 in it, and then you increment it. The increment has no effect, because the variable is set back to 0 in the next iteration of the loop.
The effect is that:
the loop executes the required number of times,
but the output file name contains 0 as the "number" every time,
so you keep overwriting the same file with new content.
Move the my $count = 0; instruction before the loop and everything should be OK.
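That is, a minimal sketch of just that fix:

my $count = 0;                # initialize once, before the loop
foreach my $vectfile ( @files ) {
    my $out_file = "D:\\Downloads\\ColorsA0.1\\" . $count . ".vect";
    $count++;                 # now each file gets a fresh number
    # ... rest of the loop body unchanged ...
}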
You seem to be clinging to a specific form of code in fear of everything falling apart if you change a single thing. I recommend that you dare to stray a little more from the formula so that the code is more concise and readable.
The problem is that you reset your $count to zero before processing each input file, so all the output files have the same name and overwrite one another. The remaining output file contains only the data from the last input file.
Here's a refactoring of your code. I can't guarantee that it will run correctly, but it looks right and does compile.
I've added use autodie to avoid having to check the status of every IO operation
I've used the same lexical file handle $fh for all the input files. Opening another file on a file handle that is already open will close it first, and a lexical file handle is closed by perl when it goes out of scope at the end of the block
I've used a while loop to iterate over the input file names instead of reading the whole list into an array, which would unnecessarily use an additional variable @files and waste space
I've used forward slashes instead of backslashes in all the file paths. This is fine in library calls on Windows: it is only a problem if they appear in command line input
I hope you'll agree that this form is more readable. I think you would have stood a much better chance of finding the problem if your code were in this form
use strict;
use warnings;
use autodie;
use feature qw/ say /;

my $indir     = 'D:/Downloads';
my $subdir    = 'A0.1'; # And M3.1 and P3.1
my $extrasdir = 'C:/Users/Owner/Documents/MoreKnotVis/ScriptsForAdditionalDataSets';

my $outdir     = "$indir/Colors$subdir";
my $topfile    = "$extrasdir/VectFileHeader.vect";
my $bottomfile = "$extrasdir/VectFileCloser.vect";

my $filenum;

while ( my $vectfile = glob "$indir/$subdir/*.vect" ) {
    say qq/Processing "$vectfile"/;
    $filenum++;
    open my $outfh, '>', "$outdir/$filenum.vect";
    my $fh;
    open $fh, '<', $topfile;
    print { $outfh } $_ while <$fh>;
    open $fh, '<', $vectfile;
    while ( <$fh> ) {
        print { $outfh } $_, $_ if 8..12;
    }
    open $fh, '<', $bottomfile;
    print { $outfh } $_ while <$fh>;
}
say 'DONE';
say 'DONE';

Unable to redirect the output of the system command to a file named error.log and STDERR to another file named test_file.errorlog

This Perl script traverses all directories and subdirectories, searching for a file named RUN in each. It then opens that file and runs the first line written in it. The problem is that I am not able to redirect the output of the system command to a file named error.log, nor STDERR to another file named test_file.errorlog; no such files are created.
Note that all variables are declared, even if the declarations are not shown here.
find (\&pickup_run, $path_to_search);

### Subroutine for extracting path of directories with RUN FILE PRESENT
sub pickup_run {
    if ($File::Find::name =~ /RUN/) {
        ### If RUN file is present, push it into array named run_file_present
        push(@run_file_present, $File::Find::name);
    }
}

###### Iterate over the array containing paths to directories containing RUN files one by one
foreach my $var (@run_file_present) {
    $var =~ s/\//\\/g;
    ($path_minus_run = $var) =~ s/RUN\b//;
    #print "$path_minus_run\n";
    my $test_case_name;
    ($test_case_name = $path_minus_run) =~ s/expression to be replaced//g;
    chdir "$path_minus_run";
    ######## While iterating over the paths, open each file
    open data, "$var";
    ##### Run the first two lines containing commands
    my @lines = <data>;
    my $return_code = system(" $lines[0] >error.log 2>test_file.errorlog");
    if ($return_code) {
        print "$test_case_name \t \t FAIL \n";
    }
    else {
        print "$test_case_name \t \t PASS \n";
    }
    close (data);
}
The problem is almost certainly that $lines[0] has a newline at the end after being read from the file
But there are several improvements you could make
Always use strict and use warnings at the top of every Perl program, and declare all your variables using my as close as possible to their first point of use
Use the three-parameter form of open and always check whether it succeeded, putting the built-in variable $! into your die string to say why it failed. You can also use autodie to save writing the code for this manually for every open, but it requires Perl v5.10.1 or better
You shouldn't put quotes around scalar variables; just use them as they are, so chdir $path_minus_run and open data, $var are correct
There is also no need to save all the files to be processed and deal with them later. Within the wanted subroutine, File::Find sets you up with $File::Find::dir set to the directory containing the file, and $_ set to the bare file name without a path. It also does a chdir to the directory for you, so the context is ideal for processing the file
use strict;
use warnings;
use v5.10.1;
use autodie;
use File::Find;

my $path_to_search;

find( \&pickup_run, $path_to_search );

sub pickup_run {
    return unless -f and $_ eq 'RUN';
    my $cmd = do {
        open my $fh, '<', $_;
        <$fh>;
    };
    chomp $cmd;
    ( my $test_name = $File::Find::dir ) =~ s/expression to be replaced//g;
    my $retcode = system( "$cmd >error.log 2>test_file.errorlog" );
    printf "%s\t\t%s\n", $test_name, $retcode ? 'FAIL' : 'PASS';
}

Process files by extension instead of individually

I have multiple files that have the extension .tdx.
Currently my program works on individual files using $ARGV[0], however the number of files are growing and I would like to use a wildcard based upon the file extension.
After much research I am at a loss.
I would like to read each file individually so the extract from the file is identified by the user.
#!C:\Perl\bin\perl.exe
use warnings;
use FileHandle;

open my $F_IN, '<', $ARGV[0] or die "Unable to open file: $!\n";
open my $F_OUT, '>', 'output.txt' or die "Unable to open file: $!\n";

while (my $line = $F_IN->getline) {
    if ($line =~ /^User/) {
        $F_OUT->print($line);
    }
    if ($line =~ /--FTP/) {
        $F_OUT->print($line);
    }
    if ($line =~ /^ftp:/) {
        $F_OUT->print($line);
    }
}
close $F_IN;
close $F_OUT;
All the files are in one directory, so I assume I will need to open the directory.
I am just not sure whether I need to build an array of files or build a list and chomp it.
You have many options:
1. Loop over @ARGV, allowing the user to pass in a list of files.
2. Use glob to pass in a pattern that perl will expand into a list of files (and then loop over that list, as in #1). This can be messy, as the user has to quote the pattern so the shell doesn't interpolate it first.
3. Write some wrapper to call your existing script over and over again.
There's also a variant of #1, which is to read from <>. This is set to either STDIN, or it'll automatically open the files named in @ARGV. See eof for an example of how to use it; a minimal sketch also follows this list.
As a variant of #2, you can pass in a directory name and use either opendir and readdir to loop over the entries (making sure to grab only files with your extension, or at the very least to ignore . and ..), or append /* or /*.tdx to it and use glob again.
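For instance, here is a minimal sketch of the <> variant, reusing the patterns from the question. Run it as perl extract.pl file1.tdx file2.tdx ... (or with a shell that expands *.tdx for you):

use strict;
use warnings;

# <> opens each file named in @ARGV in turn (or reads STDIN if none given)
while ( my $line = <> ) {
    print $line if $line =~ /^User|--FTP|^ftp:/;
}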
The glob function can help you. Just try
my @files = glob '*.tdx';
for my $file (@files) {
    # Process $file...
}
In list context, glob expands its argument to the list of file names that match the pattern. For details, see glob in perlfunc.
I never got glob to work. What I ended up doing was building an array based on the file extension .tdx. From there I copied the array to a file list and read from that. What I ended up with is:
#!C:\Perl\bin\perl.exe
use warnings;
use FileHandle;

open my $F_OUT, '>', 'output.txt' or die "Unable to open file: $!\n";

open(FILELIST, "dir /b /s \"%USERPROFILE%\\Documents\\holding\\*.tdx\" |");
my @filelist = <FILELIST>;
close(FILELIST);

foreach my $file (@filelist)
{
    chomp($file);
    open my $F_IN, '<', $file or die "Unable to open file: $!\n";
    while (my $line = $F_IN->getline)
    {
        # Do something with $line here...
    }
    close $F_IN;
}
close $F_OUT;
Thank you for your answers; they helped in the learning experience.
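For what it's worth, one common reason glob appears not to work on Windows is that it splits its pattern on whitespace, so a path with a space in it turns into two broken patterns. The bsd_glob function from the core File::Glob module does not split, so a sketch like this (assuming the same holding directory) may have worked:

use strict;
use warnings;
use File::Glob qw(bsd_glob);    # bsd_glob does not split the pattern on spaces

# Perl, not the shell, expands the environment variable here
my @filelist = bsd_glob("$ENV{USERPROFILE}/Documents/holding/*.tdx");
for my $file (@filelist) {
    # open and process $file as above
}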
If you're on a Windows machine, putting *.tdx on the command line might not work, nor may glob, which historically used the shell's globbing abilities. (It now appears that the built-in glob function uses File::Glob, so that may no longer be an issue.)
One thing you can do is not use globs, but allow the user to input the directories and suffixes they want. Then use opendir and readdir to go through the directories yourself.
use strict;
use warnings;
use feature qw(say);
use autodie;
use Getopt::Long;    # Why not do it right?
use Pod::Usage;      # It's about time to learn about POD documentation

my @suffixes;        # Hey, why not let people put in more than one suffix?
my @directories;     # Let people put in the directories they want to check
my $help;

GetOptions (
    "suffix=s"    => \@suffixes,
    "directory=s" => \@directories,
    "help"        => \$help,
) or pod2usage ( -message => "Invalid usage" );

if ( not @suffixes ) {
    @suffixes = qw(tdx);
}
if ( not @directories ) {
    @directories = qw(.);
}
if ( $help ) {
    pod2usage;
}

my $regex = join "|", @suffixes;
$regex = qr/\.($regex)$/;    # Will equal /\.(foo|bar|txt)$/ if suffixes are foo, bar, txt

for my $directory ( @directories ) {
    opendir my ($dir_fh), $directory;    # autodie will take care of errors
    while ( my $file = readdir $dir_fh ) {
        next unless -f "$directory/$file";    # readdir returns bare names, so prepend the directory
        next unless $file =~ /$regex/;
        ...;    # Here be dragons
    }
}
This will go through all of the directories your user entered and examine each entry. It uses the suffixes your user entered (with .tdx being the default) to create a regular expression to check each file name against. If the file name matches the regular expression, do whatever you want with that file.

Files are not getting renamed in the same folder

I am trying to rename existing files to Kernel.txt if they contain "Linux kernel Version" or "USB_STATE=DISCONNECTED". The script runs without any error, but no output is produced. The renamed file needs to stay in the same folder (F1, F2, F3) it was in before.
Top dir: Log
Subdirs: F1, F2, F3
F1: .bin file, .txt file, .jpg file
F2: .bin file, .txt file, .jpg file
F3: .bin file, .txt file, .jpg file
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Basename;
use File::Spec;
use Cwd;

chdir('C:\\doc\\logs');
my $dir_01 = getcwd;
my $all_file = find({ 'wanted' => \&renamefile }, $dir_01);

sub renamefile
{
    if ( -f and /.txt?/ )
    {
        my @files = $_;
        foreach my $file (@files)
        {
            open (FILE, "<", $file) or die "Can not open the file";
            my @lines = <FILE>;
            close FILE;
            for my $line ( @lines )
            {
                if ($line =~ /Linux kernel Version/gi || $line =~ /USB_STATE=DISCONNECTED/gi)
                {
                    my $dirname = dirname($file);    # file's directory, so we rename only the file itself.
                    my $file_name = basename($file); # File name for renaming.
                    my $new_file_name = $file_name;
                    $new_file_name =~ s/.* /Kernal.txt/g;  # replace the name with Kernal.txt
                    rename($file, File::Spec->catfile($dirname, $new_file_name)) or die $!;
                }
            }
        }
    }
}
This code looks a bit like cargo-cult programming. That is, some constructs are here without any indication that you understand what they do.
chdir('C:\\doc\\logs');
my $dir_01 = getcwd;
Do yourself a favour and use forward slashes, even for Windows pathnames. This is generally supported.
Your directory diagram says that there is a top dir Log, yet you chdir to C:/doc/logs. What is it?
You do realize that $dir_01 is a very nondescriptive name, and is just the path you chdir'd to? Also, File::Find does not require you to start in the working directory, so the chdir is a bit useless here. You actually want:
my $start_directory = "C:/doc/Log"; # or whatever
my $all_file = find({ 'wanted' => \&renamefile }, $start_directory);
I'm not sure what the return value of find would mean. But I'm sure that we don't have to put it into some unused variable.
When we provide key names with the => fat comma, we don't have to manually quote these keys. Therefore:
find({ wanted => \&renamefile }, $start_directory);
/.txt?/
This regex does the following:
match any character (that isn't a newline),
followed by a literal tx,
and optionally a t (the ? is a zero-or-one quantifier).
If you want to match filenames that end with .txt, you should do
/\.txt$/
The \. matches a literal period, and the $ anchors the regex at the end of the string.
my @files = $_;
foreach my $file (@files) {
    ...;
}
This would normally be written as
my $file = $_;
...;
You assign the value of $_ to the @files array, which then has one element: the contents of $_. Then you loop over this one element. Such loops don't deserve to be called loops.
open (FILE, "<", $file) or die "Can not open the file";
my @lines = <FILE>;
close FILE;
for my $line ( @lines )
{ ... }
Ah, where to begin?
Use lexical variables for file handles. These have the nice property of closing themselves.
For error handling, use autodie. If you really want to do it yourself, the error message should contain two important pieces of information:
the name of the file you couldn't open ($file)
the reason why the open failed ($!)
That would mean something like ... or die "Can't open $file: $!".
Don't read the whole file into an array and loop over that. Instead, be memory-efficient and iterate over the lines, using a while(<>)-like loop. This only reads one line at a time, which is much better.
Combined, this would look like
use autodie;  # at the top

open my $fh, "<", $file;
LINE: while (<$fh>) {
    ...;  # no $line variable, let's use $_ instead
}
Oh, and I labelled the loop (with LINE) for later reference.
if($line=~ /Linux kernel Version/gi || $line=~ /USB_STATE=DISCONNECTED/gi) { ... }
Putting the /g flag on regexes turns them into an iterator; you really don't want that here. And I'm not quite sure that case-insensitive matching is really necessary. You can move the || or into the regex, using the regex alternation |. As we now use $_ to contain the lines, we don't have to manually bind the regex to a string. Therefore, we can write:
if (/Linux Kernel Version|USB_STATE=DISCONNECTED/i) { ... }
my $dirname = dirname($file); # file's directory, so we rename only the file itself.
my $file_name = basename($file); # File name for renaming.
By default, the original $_, and therefore our $file, contains only the filename, not the directory. This isn't a problem: File::Find chdir'd into the correct directory, which makes our processing a lot easier. If you want the directory, use the $File::Find::dir variable.
my $new_file_name = $file_name;
$new_file_name =~ s/.* /Kernal.txt/g;
The /.* / regex says: match anything up to and including the last space. If this matches, replace the matched part with Kernal.txt.
The /g flag is completely useless here. Are you sure you don't want Kernel.txt, with an e? And why the space in the filename? I don't quite understand that. If you want to rename the file to Kernel.txt, just assign that as a string instead of doing weird stuff with substitutions:
my $new_file_name = "Kernel.txt";
rename($file, File::Spec->catfile($dirname, $new_file_name)) or die $!;
We already established that an error message should also include the filename, or even better: we should use automatic error handling.
Also, we are already in the correct directory, so we don't have to concatenate the new name with the directory.
rename $file => $new_file_name; # error handling by autodie
last LINE;
That should be enough. Also note that I leave the LINE loop. Once we renamed the file, there is no need to check the other lines as well.
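Putting all of those pieces together, the reworked script might look something like this. It is an untested sketch assembled from the points above, and the start directory is a placeholder:

use strict;
use warnings;
use autodie;
use File::Find;

my $start_directory = 'C:/doc/logs';   # placeholder; use your top directory
find( { wanted => \&renamefile }, $start_directory );

sub renamefile {
    return unless -f and /\.txt$/;     # only plain .txt files
    my $file = $_;                     # bare file name; find has chdir'd here
    open my $fh, '<', $file;
    LINE: while (<$fh>) {
        if (/Linux kernel Version|USB_STATE=DISCONNECTED/i) {
            rename $file => 'Kernel.txt';   # error handling by autodie
            last LINE;                      # one rename per file is enough
        }
    }
}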

Script in Perl to copy directory structure from the source to the destination

#!/usr/bin/perl -w
use File::Copy;
use strict;

my $i = "0";
my $j = "1";
my $source_directory = $ARGV[$i];
my $target_directory = $ARGV[$j];
#print $source_directory,"\n";
#print $target_directory,"\n";

my @list = process_files($source_directory);
print "remaining files\n";
print @list;

# Accepts one argument: the full path to a directory.
# Returns: A list of files that reside in that path.
sub process_files {
    my $path = shift;
    opendir (DIR, $path)
        or die "Unable to open $path: $!";

    # We are just chaining the grep and map from
    # the previous example.
    # You'll see this often, so pay attention ;)
    # This is the same as:
    # LIST = map(EXP, grep(EXP, readdir()))
    my @files =
        # Third: Prepend the full path
        map { $path . '/' . $_ }
        # Second: take out '.' and '..'
        grep { !/^\.{1,2}$/ }
        # First: get all files
        readdir (DIR);
    closedir (DIR);

    for (@files) {
        if (-d $_) {
            # Add all of the new files from this directory
            # (and its subdirectories, and so on... if any)
            push @files, process_files ($_);
        } else {
            #print @files,"\n";
            # for (@files)
            while (@files)
            {
                my $input = pop @files;
                print $input, "\n";
                copy($input, $target_directory);
            }
        }
    }
    # NOTE: we're returning the list of files
    return @files;
}
This basically copies files from the source to the destination, but I need some guidance on how to copy the directory structure as well. The main thing to note here is that no CPAN modules are allowed except copy, move, and path.
Instead of rolling your own directory processing adventure, why not simply use File::Find to go through the directory structure for you.
#! /usr/bin/env perl
use v5.10;
use strict;
use warnings;

use File::Find;
use File::Path qw(make_path);
use File::Copy;
use Cwd;

# The first two arguments are source and dest.
# 'shift' pops those arguments off the front of
# the @ARGV list, and returns what was removed.
# I use "cwd" to get the current working directory
# and prepend that to $dest_dir. That way, $dest_dir
# is in correct relationship to my input parameter.
my $source_dir = shift;
my $dest_dir   = cwd . "/" . shift;

# I change into $source_dir, so the $source_dir
# directory isn't in the file names when I find them.
chdir $source_dir
    or die qq(Cannot change into "$source_dir");

find ( sub {
    return unless -f;    # We want files only
    make_path "$dest_dir/$File::Find::dir"
        unless -d "$dest_dir/$File::Find::dir";
    copy "$_", "$dest_dir/$File::Find::dir"
        or die qq(Can't copy "$File::Find::name" to "$dest_dir/$File::Find::dir");
}, ".");
Now, you don't need a process_files subroutine. You let File::Find::find handle recursing the directory for you.
By the way, you could rewrite the find like this which is how you usually see it in the documentation:
find ( \&wanted, ".");

sub wanted {
    return unless -f;    # We want files only
    make_path "$dest_dir/$File::Find::dir"
        unless -d "$dest_dir/$File::Find::dir";
    copy "$_", "$dest_dir/$File::Find::dir"
        or die qq(Can't copy "$File::Find::name" to "$dest_dir/$File::Find::dir");
}
I prefer to embed my wanted subroutine in my find command because I think it just looks better. It guarantees that the wanted subroutine is kept with the find command, so you don't have to look at two different places to see what's going on.
Also, the find command has a tendency to swallow up your entire program. Imagine that I get a list of files and do some complex processing on them: the entire program can end up in the wanted subroutine. To avoid this, you simply create an array of the files you want to operate on, and then operate on them inside your program:
...
my @file_list;
find ( \&wanted, "$source_dir" );
for my $file ( @file_list ) {
    ...
}

sub wanted {
    return unless -f;
    push @file_list, $File::Find::name;
}
I find this a programming abomination. First of all, what is going on with find? It's modifying my @file_list, but how? Nowhere in the find command is @file_list mentioned. What is it doing?
Then at the end of my program is this wanted subroutine that uses the variable @file_list in a global manner. That's bad programming practice.
Embedding my subroutine directly into my find command solves many of these issues:
my @file_list;
find ( sub {
    return unless -f;
    push @file_list, $File::Find::name;
}, $source_dir );

for my $file ( @file_list ) {
    ...
}
This just looks better. I can see that @file_list is being manipulated directly by my find command. Plus, that pesky wanted subroutine has disappeared from the end of my program. It's the exact same code; it just looks better.
Let's get to what that find command is doing and how it works with the wanted subroutine:
The find command finds each and every file, directory, link, or whatnot located in the directory list you pass to it. With each item it finds in that directory, it passes it to your wanted subroutine for processing. A return leaves the wanted subroutine and allows find to fetch the next item.
Each time the wanted subroutine is called, find sets three variables:
$File::Find::name: The name of the item found with the full path attached to it.
$File::Find::dir: The name of the directory where the item was found.
$_: The name of the item without the directory name.
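A tiny sketch to see the three variables side by side (the start directory is a placeholder):

use strict;
use warnings;
use File::Find;

find( sub {
    print "name: $File::Find::name\n";   # full path to the item
    print "dir:  $File::Find::dir\n";    # directory containing the item
    print "item: $_\n";                  # bare name, without the directory
}, 'some_dir' );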
In Perl, that $_ variable is very special. It's sort of a default variable for many commands. That is, if you execute a command and don't give it a variable to use, that command will use $_. For example:
print
prints out $_
return if -f;
Is the same as saying this:
if ( -f $_ ) {
    return;
}
This for loop:
for ( @file_list ) {
    ...
}
Is the same as this:
for $_ ( @file_list ) {
    ...
}
Normally, I avoid the default variable. It's global in scope and it's not always obvious what is being acted upon. However, there are a few circumstances where I'll use it because it really clarifies the program's meaning:
return unless -f;
in my wanted function is very obvious. I exit the wanted subroutine unless I was handed a file. Here's another:
return unless /\.txt$/;
This will exit my wanted function unless the item ends with '.txt'.
I hope this clarifies what my program is doing. Plus, I eliminated a few bugs while I was at it. I had miscopied $File::Find::dir as $File::Find::name, which is why you got the error.