So my program is supposed to recursively go through a directory and then, for each file in the directory, open the file and search for the words "error", "fail", and "failed". Then it should write the instances where these words occur, as well as the rest of the characters on the line after those words, out to a file designated at the command prompt. I have been having trouble making sure the program performs the search on the files that are found in the directory. Right now it does recurse through the directory and even creates a file to write out to; however, it does not seem to be searching through the files found while recursing. Here is my code:
#!/usr/local/bin/perl
use warnings;
use strict;
use File::Find;
my $argument2 = $ARGV[0];
my $dir = "c:/program/Scripts/Directory1"; #directory to search through
open FILE, ">>$argument2" or die $!; #file to write out
my $unsuccessful = 0;
my @errors = ();
my @failures= ();
my @failures2 = ();
my @name = ();
my @file;
my $file;
my $filename;
opendir(DIR, $dir) or die $!;
while($file = readdir(DIR)){
next if($file =~ m/^\./);
foreach(0..$#file){
print $_;
open(FILELIST, '<', $_);
while(<FILELIST>){
if (/Unsuccessful/i){
$unsuccessful = 1;
}
if(/ERROR/ ){
push(@errors, "ERROR in line $.\n");
print "\t\tERROR in line $.:$1\n" if (/Error\s+(.+)/);
}
if(/fail/i ){
push(@failures, "ERROR in line $.\n");
print FILE "ERROR in line $.:$1\n" if (/fail\s+(.+)/);
}
if(/failed/i ){
push(@failures2, "ERROR in line $.\n");
print FILE "ERROR in line $.:$1\n" if (/failed\s+(.+)/);
}
if ($unsuccessful){
}
}
close FILELIST;
}
}
closedir(DIR);
close FILE;
So, to clarify, my problem is that the search contained in the "while()" loop does not seem to be executing on the files found in the directory recursively. Any comments/suggestions/help that you can give on why this may be happening would be very helpful. I am new to Perl so some sample code would also just help me understand what you are trying to say. Thank you very much.
Typically, when I want to do something on files recursively, I start with find2perl . -print, which generates the boilerplate for me, including the wanted function, which I can then modify to do whatever I want.
For example
# Traverse desired filesystems (no_chdir keeps $File::Find::name usable as a path)
File::Find::find({wanted => \&wanted, no_chdir => 1}, '.');
exit;
sub wanted {
return unless -f $File::Find::name;
return unless -R $File::Find::name;
open (F,"<",$File::Find::name) or warn("Error opening $File::Find::name : $!\n");
while(<F>) {
if(m/error/) { print; }
if(m/fail/) { print; }
}
close F;
}
This is an example of a recursive Perl directory listing. In reality, I would probably use File::Find (a short File::Find sketch follows after this listing), or really just grep -R, but I am assuming this is homework of some kind:
use strict;
use warnings;
my $dir = $ARGV[0];
my $level = 0;
depthFirstDirectoryList($dir, $level);
sub depthFirstDirectoryList{
my ($dir, $level) = @_;
opendir (my $ind, $dir) or die "Can't open $dir for reading: $!\n";
while(my $file = readdir($ind)){
if(-d "$dir/$file" && $file ne "." && $file ne ".."){
depthFirstDirectoryList("$dir/$file", $level + 1);
}
else{
no warnings 'uninitialized';
print "\t" x $level . "file: $dir/$file\n";
}
}
}
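For reference, here is a minimal File::Find sketch of the same listing (without the indentation by depth), just to show the module mentioned above:
use strict;
use warnings;
use File::Find;
my $dir = $ARGV[0];
find(sub {
    # $File::Find::name is the full path of the current entry
    print "file: $File::Find::name\n" unless -d $_;
}, $dir);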
I am trying to read files inside a folder in Perl using Directory Handle. The script is able to show the file name but it is throwing two errors: readdir() attempted on invalid dirhandle DIR and closedir() attempted on invalid dirhandle DIR.
I am calling a subroutine and passing two values:
if($fileEnding eq "directory")
{
print "$fileName is a directory\n";
FolderInvestigator1($a, $fileName);
}
$a holds the directory name and its path which is being passed via command-line argument. I am passing the control to a subroutine.
Below is my code:-
sub FolderInvestigator1
{
my $prevPath = shift;
my $receivedFolder = shift;
my $realPath = "$prevPath/$receivedFolder";
my $path = File::Spec->rel2abs($realPath);
print "$path\n";
print "$receivedFolder Folder Received\n";
opendir(DIR, $path) or die "You've Passed Invalid Directory as Arguments\n";
while(my $fileName = readdir DIR)
{
next if $fileName =~ /^\./;
print "The Vacant Folder has $fileName file\n";
}
closedir(DIR);
}
Here is my complete code:-
FirstResponder();
sub FirstResponder
{
if (@ARGV == 0)
{
print "No Arguments Passed\n";
}
else
{
foreach my $a(@ARGV)
{
print "Investigating $a directory below:-\n";
opendir(DIR, $a) or die "You've Passed Invalid Directory as Arguments\n";
while(my $fileName = readdir DIR)
{
next if $fileName =~ /^\./;
$ending = `file --mime-type $a/$fileName`;
#print $ending;
$fileEnding = `basename -s $ending`;
#print $fileEnding;
chomp($fileEnding);
#print $fileName,"\n";
if($fileEnding eq "directory")
{
print "$fileName is a directory\n";
FolderInvestigator1($a, $fileName);
}
else
{
CureExtensions($a, $fileName);
}
}
closedir(DIR);
my @files = glob("$a/*");
my $size = @files;
if($size == 0)
{
print "The $a is an empty directory\n";
}
}
}#Foreach Ends Here..
}
Please see the screenshot for more information on what's going on!
I am not able to work out why the directory handle is throwing an error even though the path is correct. Some guidance will be highly appreciated.
The problem with your code is that you have a nested use of the bareword (global) dir handle DIR, and hence the inner loop closes the handle before the outer loop is finished:
opendir(DIR, $arg) or die "...";
while(my $fileName = readdir DIR) {
# ... more code here
opendir(DIR, $path) or die "...";
while(my $file = readdir DIR) {
# ... more code here
}
closedir DIR;
}
closedir DIR;
Here is an example of how you could write the first loop using a lexical dir handle $DIR instead of using a legacy global bareword handle DIR:
use feature qw(say);
use strict;
use warnings;
use File::Spec;
FirstResponder();
sub FirstResponder {
foreach my $arg (@ARGV) {
print "Investigating $arg directory below:-\n";
opendir(my $DIR, $arg) or die "You've Passed Invalid Directory as Arguments\n";
my $size = 0;
while(my $fileName = readdir $DIR) {
next if $fileName =~ /^\./;
my $path = File::Spec->catfile( $arg, $fileName );
if( -d $path) {
print "$fileName is a directory\n";
say "FolderInvestigator1($arg, $fileName)"
}
else {
say "CureExtensions($arg, $fileName)";
}
$size++;
}
closedir $DIR;
if($size == 0) {
print "The $arg is an empty directory\n";
}
}
}
The use of bareword filehandle names is old style and deprecated, according to perldoc open:
An older style is to use a bareword as the filehandle, as
open(FH, "<", "input.txt")
or die "Can't open < input.txt: $!";
Then you can use FH as the filehandle, in close FH and so on. Note that it's a global
variable, so this form is not recommended in new code.
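For comparison, the same open written with a lexical filehandle might look like this (a minimal sketch, reusing the placeholder filename from the quote above):
open(my $fh, "<", "input.txt")
    or die "Can't open < input.txt: $!";
while (my $line = <$fh>) {
    print $line;
}
# $fh is closed automatically when it goes out of scope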
See also:
Why does Perl open() documentation use two different FILEHANDLE style?
Don't Open Files in the old way
Could anyone please share with me a snippet showing how to do a grep search in a Perl script? For example, I need this grep:
grep 1115852311 /opt/files/treated/postpaid/*
to be done in a Perl script, printing all the matches.
I tried the below, but it did not work:
my $start_dir= "\opt\files\treated\postpaid\";
my $file_name = "*";
my @filematches;
opendir(DIR, "$start_dir");
@xml_files = grep(1115852311,readdir(DIR));
print @xml_files;
A good start would be to read the documentation for grep(). If you do, you'll see that Perl's grep() works rather differently to the Unix grep command. The Unix command just looks for text in a list of files. The Perl version works on any list of data and returns any elements in that list for which an arbitrary Boolean expression is true.
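For instance, here is a tiny illustration of Perl's grep() filtering an ordinary list (the numbers are made up purely for this example):
my @numbers = (3, 17, 8, 42);
my @big     = grep { $_ > 10 } @numbers;   # @big is now (17, 42)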
A Perl version of the Unix command would look something like this:
while (<$some_open_filehandle>) {
print if /$some_string/;
}
That's not quite what you want, but we can use it as a start. First, let's write something that takes a filename and string and checks whether the string appears in the file:
sub is_string_in_file {
my ($filename, $string) = @_;
open my $fh, '<', $filename or die "Cannot open file '$filename': $!\n";
return grep { /$string/ } <$fh>;
}
We can now use that in a loop which uses readdir() to get a list of files.
my @files;
my $dir = '/opt/files/treated/postpaid/';
opendir my $dh, $dir or die $!;
while (my $file = readdir($dh)) {
if (is_string_in_file("$dir$file", 1115852311)) {
push @files, "$dir$file";
}
}
After running that code, the list of files that contain your string will be in @files.
You might want to look at glob() instead of opendir() and readdir().
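As a rough sketch of that suggestion (reusing the hypothetical directory and search string from above, and the is_string_in_file() helper defined earlier):
my $dir = '/opt/files/treated/postpaid/';
# glob() expands the wildcard itself, so no opendir/readdir loop is needed
my @files = grep { -f $_ && is_string_in_file($_, 1115852311) } glob("$dir*");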
I used the below snippet to achieve what I wanted:
#!/usr/bin/perl
use strict;
use warnings;
sub is_string_in_file {
my ($filename, $string) = @_;
open my $fh, '<', $filename
or die "Cannot open file \n";
while(my $line = <$fh>){
if($line =~ /$string/){
print $string;
print $filename."\n";
}
}
#return grep { $_ eq $string } <$fh>;
}
my @files;
my $dir = '/opt/files/treated/postpaid/';
opendir my $dh, $dir or die $!;
while (my $file = readdir($dh)) {
is_string_in_file("$dir$file", 1115852311);
}
Apologies if this is a bit long-winded, but I would really appreciate an answer here as I am having difficulty getting this to work.
Building on from this question here, I have this script that works on a csv file (orig.csv) and produces the csv file that I want (format.csv). What I want is to make this more generic so that it accepts any number of '.csv' files and produces an 'output_csv' for each input file. Can anyone help?
#!/usr/bin/perl
use strict;
use warnings;
open my $orig_fh, '<', 'orig.csv' or die $!;
open my $format_fh, '>', 'format.csv' or die $!;
print $format_fh scalar <$orig_fh>; # Copy header line
my %data;
my @labels;
while (<$orig_fh>) {
chomp;
my @fields = split /,/, $_, -1;
my ($label, $max_val) = @fields[1,12];
if ( exists $data{$label} ) {
my $prev_max_val = $data{$label}[12] || 0;
$data{$label} = \@fields if $max_val and $max_val > $prev_max_val;
}
else {
$data{$label} = \@fields;
push @labels, $label;
}
}
for my $label (@labels) {
print $format_fh join(',', @{ $data{$label} }), "\n";
}
I was hoping to use this script from here, but am having great difficulty putting the two together:
#!/usr/bin/perl
use strict;
use warnings;
#If you want to open a new output file for every input file
#Do it in your loop, not here.
#my $outfile = "KAC.pdb";
#open( my $fh, '>>', $outfile );
opendir( DIR, "/data/tmp" ) or die "$!";
my @files = readdir(DIR);
closedir DIR;
foreach my $file (@files) {
open( FH, "/data/tmp/$file" ) or die "$!";
my $outfile = "output_$file"; #Add a prefix (anything, doesn't have to say 'output')
open(my $fh, '>', $outfile);
while (<FH>) {
my ($line) = $_;
chomp($line);
if ( $line =~ m/KAC 50/ ) {
print $fh $_;
}
}
close($fh);
}
The script reads all the files in the directory, finds the lines containing the string 'KAC 50', and then appends those lines to an output_$file for that input file, so there will be one output_$file for every input file that is read.
Issues with this script that I have noted and was looking to fix (a sketch addressing both follows below):
- it reads the '.' and '..' entries in the directory and produces an 'output_.' and 'output_..' file
- it will also do the same with this script file itself.
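For illustration only, here is one way both points could be handled at the top of the existing foreach loop (this sketch assumes File::Basename is available and uses $0, the name of the running script):
use File::Basename;
foreach my $file (@files) {
    next if $file eq '.' or $file eq '..';   # skip the special directory entries
    next if $file eq basename($0);           # skip this script itself
    # ... open the file and process it as before ...
}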
I was also trying to make it dynamic by getting this script to work in any directory it is run in by adding this code:
use Cwd qw();
my $path = Cwd::cwd();
print "$path\n";
and
opendir( DIR, $path ) or die "$!"; # open the current directory
open( FH, "$path/$file" ) or die "$!"; #open the file
EDIT: I have tried combining the versions but am getting errors. Advice greatly appreciated.
UserName#wabcl13 ~/Perl
$ perl formatfile_QforStackOverflow.pl
Parentheses missing around "my" list at formatfile_QforStackOverflow.pl line 13.
source dir -> /home/UserName/Perl
Can't use string ("/home/UserName/Perl/format_or"...) as a symbol ref while "strict refs" in use at formatfile_QforStackOverflow.pl line 28.
combined code::
use strict;
use warnings;
use autodie; # this is used for the multiple files part...
#START::Getting current working directory
use Cwd qw();
my $source_dir = Cwd::cwd();
#END::Getting current working directory
print "source dir -> $source_dir\n";
my $output_prefix = 'format_';
opendir my $dh, $source_dir; #Changing this to work on current directory; changing back
for my $file (readdir($dh)) {
next if $file !~ /\.csv$/;
next if $file =~ /^\Q$output_prefix\E/;
my $orig_file = "$source_dir/$file";
my $format_file = "$source_dir/$output_prefix$file";
# .... old processing code here ...
## Start:: This part works on one file edited for this script ##
#open my $orig_fh, '<', 'orig.csv' or die $!; #line 14 and 15 above already do this!!
#open my $format_fh, '>', 'format.csv' or die $!;
#print $format_fh scalar <$orig_fh>; # Copy header line #orig needs changeing
print $format_file scalar <$orig_file>; # Copy header line
my %data;
my @labels;
#while (<$orig_fh>) { #orig needs changing
while (<$orig_file>) {
chomp;
my @fields = split /,/, $_, -1;
my ($label, $max_val) = @fields[1,12];
if ( exists $data{$label} ) {
my $prev_max_val = $data{$label}[12] || 0;
$data{$label} = \@fields if $max_val and $max_val > $prev_max_val;
}
else {
$data{$label} = \@fields;
push @labels, $label;
}
}
for my $label (@labels) {
#print $format_fh join(',', @{ $data{$label} }), "\n"; #orig needs changing
print $format_file join(',', @{ $data{$label} }), "\n";
}
## END:: This part works on one file edited for this script ##
}
How do you plan on inputting the list of files to process and their preferred output destination? Maybe just have a fixed directory in which you want to process all the csv files, and prefix the result.
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
my $source_dir = '/some/dir/with/csv/files';
my $output_prefix = 'format_';
opendir my $dh, $source_dir;
for my $file (readdir($dh)) {
next if $file !~ /\.csv$/;
next if $file =~ /^\Q$output_prefix\E/;
my $orig_file = "$source_dir/$file";
my $format_file = "$source_dir/$output_prefix$file";
# .... old processing code here ...
}
Alternatively, you could just have an output directory instead of prefixing the files. Either way, this should get you on your way.
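For example, a rough sketch of that output-directory variant might look like this (continuing under the same strict/warnings/autodie preamble as above; the directory names are placeholders, and the old processing code is still elided):
my $source_dir = '/some/dir/with/csv/files';
my $output_dir = '/some/dir/for/output';
mkdir $output_dir unless -d $output_dir;   # make sure the output directory exists
opendir my $dh, $source_dir;
for my $file (readdir($dh)) {
    next if $file !~ /\.csv$/;
    my $orig_file   = "$source_dir/$file";
    my $format_file = "$output_dir/$file";   # same file name, different directory
    # .... old processing code here ...
}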
Hi, I want to read directories and sub-directories without knowing the directory names. The current directory is "D:/Temp". 'Temp' has sub-directories like 'A1', 'A2'. Again, 'A1' has sub-directories like 'B1', 'B2', and 'B1' has sub-directories like 'C1', 'C2'. The Perl script doesn't know these directories, so it has to first find a directory and then read one file at a time in dir 'C1'; once all files are read in 'C1' it should change to dir 'C2'. I tried the code below, but here I don't want to read all the files into an array (@files); I need one file at a time. The elements of the array @dir should be as follows:
$dir[0] = "D:/Temp/A1/B1/C1"
$dir[1] = "D:/Temp/A1/B1/C2"
$dir[2] = "D:/Temp/A1/B2/C1"
Below is the code I tried.
use strict;
use File::Find::Rule;
use Data::Dumper;
my $dir = "D:/Temp";
my @dir = File::Find::Rule->directory->in($dir);
print Dumper (\@dir);
my $readDir = $dir[3];
opendir ( DIR, $readDir ) || die "Error in opening dir $readDir\n";
my @files = grep { !/^\.\.?$/ } readdir DIR;
print STDERR "files: @files \n\n";
for my $fil (@files) {
open (F, "<$fil");
read (F, my $data);
close (F);
print "$data";
}
use File::Find;
use strict;
use warnings;
my @dirs;
my %has_children;
find(sub {
if (-d) {
push @dirs, $File::Find::name;
$has_children{$File::Find::dir} = 1;
}
}, 'D:/Temp');
my @ends = grep {! $has_children{$_}} @dirs;
print "$_\n" for (@ends);
Your Goal: Find the absolute paths to those directories that do not themselves have child directories.
I'll call those directories of interest terminal directories. Here's the prototype for a function that I believe provides the convenience you are looking for. The function returns its result as a list.
my @list = find_terminal_directories($full_or_partial_path);
And here's an implementation of find_terminal_directories(). Note that this implementation does not require the use of any global variables. Also note the use of a private helper function that is called recursively.
On my Windows 7 system, for the input directory C:/Perl/lib/Test, I get the output:
== List of Terminal Directories ==
c:/Perl/lib/Test/Builder/IO
c:/Perl/lib/Test/Builder/Tester
c:/Perl/lib/Test/Perl/Critic
== List of Files in each Terminal Directory: ==
c:/Perl/lib/Test/Builder/IO/Scalar.pm
c:/Perl/lib/Test/Builder/Tester/Color.pm
c:/Perl/lib/Test/Perl/Critic/Policy.pm
Implementation
#!/usr/bin/env perl
use strict;
use warnings;
use Cwd qw(abs_path getcwd);
my @dir_list = find_terminal_directories("C:/Perl/lib/Test");
print "== List of Terminal Directories ==\n";
print join("\n", #dir_list), "\n";
print "\n== List of Files in each Terminal Directory: ==\n";
for my $dir (@dir_list) {
for my $file (<"$dir/*">) {
print "$file\n";
open my $fh, '<', $file or die $!;
my $data = <$fh>; # reads the first line; set "local $/;" first to slurp the whole file
close $fh;
# Now, do something with $data !
}
}
sub find_terminal_directories {
my $rootdir = shift;
my @wanted;
my $cwd = getcwd();
chdir $rootdir;
find_terminal_directories_helper(".", \@wanted);
chdir $cwd;
return @wanted;
}
sub find_terminal_directories_helper {
my ($dir, $wanted) = @_;
return if ! -d $dir;
opendir(my $dh, $dir) or die "open directory error!";
my $count = 0;
foreach my $child (readdir($dh)) {
my $abs_child = abs_path($child);
next if (! -d $child || $child eq "." || $child eq "..");
++$count;
chdir $child;
find_terminal_directories_helper($abs_child, $wanted); # recursion!
chdir "..";
}
push @$wanted, abs_path($dir) if ! $count; # no sub-directories found!
}
Perhaps the following will be helpful:
use strict;
use warnings;
use File::Find::Rule;
my $dir = "D:/Temp";
local $/;
my @dirs =
sort File::Find::Rule->exec( sub { File::Find::Rule->directory->in($_) == 1 }
)->directory->in($dir);
for my $dir (@dirs) {
for my $file (<"$dir/*">) {
open my $fh, '<', $file or die $!;
my $data = <$fh>;
close $fh;
print $data;
}
}
local $/; lets us slurp the file's contents into a variable. Delete it if you only want to read the first line.
The sub in the exec() is used to pass only those dirs which don't contain a dir
sort is used to arrange those dirs in your wanted order
A file glob <"$dir/*"> is used to get the files in each dir
Edit: Have modified the code to find only 'terminal directories.' Thanks to DavidRR for this spec clarification.
I would use File::Find
Sample script:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
my $dir = "/home/chris";
find(\&wanted, $dir);
sub wanted {
print "dir: $File::Find::dir\n";
print "file in dir: $_\n";
print "complete path to file: $File::Find::name\n";
}
OUTPUTS:
$ test.pl
dir: /home/chris/test_dir
file in dir: test_dir2
complete path to file: /home/chris/test_dir/test_dir2
dir: /home/chris/test_dir/test_dir2
file in dir: foo.txt
complete path to file: /home/chris/test_dir/test_dir2/foo.txt
...
Using backticks, write subdirs and files to a file called filelist:
`ls -R $dir > filelist`
How do I get Perl to read the contents of a given directory into an array?
Backticks can do it, but is there some method using 'scandir' or a similar term?
opendir(D, "/path/to/directory") || die "Can't open directory: $!\n";
while (my $f = readdir(D)) {
print "\$f = $f\n";
}
closedir(D);
EDIT: Oh, sorry, missed the "into an array" part:
my $d = shift;
opendir(D, "$d") || die "Can't open directory $d: $!\n";
my @list = readdir(D);
closedir(D);
foreach my $f (@list) {
print "\$f = $f\n";
}
EDIT2: Most of the other answers are valid, but I wanted to comment on this answer specifically, in which this solution is offered:
opendir(DIR, $somedir) || die "Can't open directory $somedir: $!";
@dots = grep { (!/^\./) && -f "$somedir/$_" } readdir(DIR);
closedir DIR;
First, to document what it's doing since the poster didn't: it's passing the returned list from readdir() through a grep() that only returns those values that are files (as opposed to directories, devices, named pipes, etc.) and that do not begin with a dot (which makes the list name @dots misleading, but that's due to the change he made when copying it over from the readdir() documentation). Since it limits the contents of the directory it returns, I don't think it's technically a correct answer to this question, but it illustrates a common idiom used to filter filenames in Perl, and I thought it would be valuable to document. Another example seen a lot is:
@list = grep !/^\.\.?$/, readdir(D);
This snippet reads all contents from the directory handle D except '.' and '..', since those are very rarely desired to be used in the listing.
A quick and dirty solution is to use glob
@files = glob ('/path/to/dir/*');
This will do it, in one line (note the '*' wildcard at the end)
@files = </path/to/directory/*>;
# To demonstrate:
print join(", ", #files);
IO::Dir is nice and provides a tied hash interface as well.
From the perldoc:
use IO::Dir;
$d = IO::Dir->new(".");
if (defined $d) {
while (defined($_ = $d->read)) { something($_); }
$d->rewind;
while (defined($_ = $d->read)) { something_else($_); }
undef $d;
}
tie %dir, 'IO::Dir', ".";
foreach (keys %dir) {
print $_, " " , $dir{$_}->size,"\n";
}
So you could do something like:
tie %dir, 'IO::Dir', $directory_name;
my @dirs = keys %dir;
You could use DirHandle:
use DirHandle;
$d = new DirHandle ".";
if (defined $d)
{
while (defined($_ = $d->read)) { something($_); }
$d->rewind;
while (defined($_ = $d->read)) { something_else($_); }
undef $d;
}
DirHandle provides an alternative, cleaner interface to the opendir(), closedir(), readdir(), and rewinddir() functions.
Similar to the above, but I think the best version is (slightly modified) from "perldoc -f readdir":
opendir(DIR, $somedir) || die "can't opendir $somedir: $!";
@dots = grep { (!/^\./) && -f "$somedir/$_" } readdir(DIR);
closedir DIR;
You can also use the children method from the popular Path::Tiny module:
use Path::Tiny;
my @files = path("/path/to/dir")->children;
This creates an array of Path::Tiny objects, which are often more useful than just filenames if you want to do things to the files, but if you want just the names:
my @files = map { $_->stringify } path("/path/to/dir")->children;
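For instance, here is a small sketch of working with the Path::Tiny objects directly (is_file, basename, and slurp are standard Path::Tiny methods; the directory is a placeholder):
for my $file (path("/path/to/dir")->children) {
    next unless $file->is_file;              # skip sub-directories
    my $size = length($file->slurp);         # read the file's contents
    print $file->basename, " has $size bytes\n";
}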
Here's an example of recursing through a directory structure and copying files from a backup script I wrote.
use File::Copy;   # provides copy()
use File::stat;   # makes stat() return an object with an mtime method
sub copy_directory {
my ($source, $dest) = @_;
my $start = time;
# get the contents of the directory.
opendir(D, $source);
my @f = readdir(D);
closedir(D);
# recurse through the directory structure and copy files.
foreach my $file (@f) {
# Setup the full path to the source and dest files.
my $filename = $source . "\\" . $file;
my $destfile = $dest . "\\" . $file;
# get the file info for the 2 files.
my $sourceInfo = stat( $filename );
my $destInfo = stat( $destfile );
# make sure the destination directory exists.
mkdir( $dest, 0777 );
if ($file eq '.' || $file eq '..') {
} elsif (-d $filename) { # if it's a directory then recurse into it.
#print "entering $filename\n";
copy_directory($filename, $destfile);
} else {
# Only backup the file if it has been created/modified since the last backup
if( (not -e $destfile) || ($sourceInfo->mtime > $destInfo->mtime ) ) {
#print $filename . " -> " . $destfile . "\n";
copy( $filename, $destfile ) or print "Error copying $filename: $!\n";
}
}
}
print "$source copied in " . (time - $start) . " seconds.\n";
}
from: http://perlmeme.org/faqs/file_io/directory_listing.html
#!/usr/bin/perl
use strict;
use warnings;
my $directory = '/tmp';
opendir (DIR, $directory) or die $!;
while (my $file = readdir(DIR)) {
next if ($file =~ m/^\./);
print "$file\n";
}
The following example (based on a code sample from perldoc -f readdir) gets all the files (not directories) beginning with a period from the open directory. The filenames are found in the array @dots.
#!/usr/bin/perl
use strict;
use warnings;
my $dir = '/tmp';
opendir(DIR, $dir) or die $!;
my @dots
= grep {
/^\./ # Begins with a period
&& -f "$dir/$_" # and is a file
} readdir(DIR);
# Loop through the array printing out the filenames
foreach my $file (@dots) {
print "$file\n";
}
closedir(DIR);
exit 0;