Why does readdir() list the filenames in the wrong order? - perl

I'm using the following code to read filenames from a directory and push them onto an array:
#!/usr/bin/perl
use strict;
use warnings;

my $directory = "/var/www/out-original";
my $filterstring = ".csv";
my @files;

# Open the folder
opendir(DIR, $directory) or die "couldn't open $directory: $!\n";
foreach my $filename (readdir(DIR)) {
    if ($filename =~ m/$filterstring/) {
        # print $filename;
        # print "\n";
        push(@files, $filename);
    }
}
closedir DIR;

foreach my $file (@files) {
    print $file . "\n";
}
The output I get from running this code is:
Report_10_2014.csv
Report_04_2014.csv
Report_07_2014.csv
Report_05_2014.csv
Report_02_2014.csv
Report_06_2014.csv
Report_03_2014.csv
Report_01_2014.csv
Report_08_2014.csv
Report.csv
Report_09_2014.csv
Why is this code pushing the file names into the array in this order, and not from 01 to 10?

Unix directories are not stored in sorted order. Tools such as ls (and the shell, when it expands a glob) sort directory listings for you, but Perl's readdir does not; it returns entries in whatever order the kernel supplies them, which reflects how they happen to be stored in the directory. If you want the results sorted, you'll need to do that yourself:
for my $filename (sort readdir(DIR)) {
(Btw: bareword file handles, like DIR, are global variables; it's considered good practice to use lexical file handles instead, like:
opendir my $dir, $directory or die "Couldn't open $directory: $!\n";
for my $filename (sort readdir($dir)) {
as a safety measure.)
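Putting the pieces together, a minimal sketch of a corrected version of the program above (lexical handles, an anchored filter, and an explicit sort):
#!/usr/bin/perl
use strict;
use warnings;

my $directory = "/var/www/out-original";

opendir my $dh, $directory or die "couldn't open $directory: $!\n";
# readdir makes no ordering promise, so sort the names ourselves
my @files = sort grep { /\.csv$/ } readdir $dh;
closedir $dh;

print "$_\n" for @files;
With the sample files above, this prints Report.csv first (in ASCII, "." sorts before "_") and then Report_01_2014.csv through Report_10_2014.csv.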

Related

Unable to open files returned by readdir in Perl [duplicate]

I have a problem with a Perl script, as follows.
I must open and analyze all the *.txt files in a directory, but I cannot.
The file names are read into the @files array and printed correctly, but I cannot open those files for reading.
This is my code:
my $dir = "../Scrivania/programmi";
opendir my ($dh), $dir;
my @files = grep { -f and /\.txt/i } readdir $dir;
closedir $dh;

for my $file ( @files ) {
    $file = catfile($dir, $file);
    print qq{Opening "$file"\n};
    open my $fh, '<', $file;
    # Do stuff with the data from $fh
    print "sono nel foreach\n";    # "I am in the foreach"
    print " in : " . "$fh\n";
    #open(CANALI, $fh);
    #@righe = <CANALI>;
    #close(CANALI);
    #print "canali:" . "@righe\n";
    #foreach $canali (@righe)
    #{
    #    $canali =~ /\d\d:\d\d (-) (.*)/;
    #    $ora = $1;
    #
    #    if ($hhSplit[0] == $ora)
    #    {
    #        push(@output, "$canali");
    #    }
    #}
}
The main problem you have is that the file names returned by readdir have no path, so you're trying to open, say, x.txt when you should be opening ../Sc/direct/x.txt. The file doesn't exist in the current working directory, so your open call fails.
You also have a strange mixture of stuff in glob("$dir/(.*).txt/"), which looks a little like a regex pattern; glob doesn't understand that. Moreover, the value of $dir is a directory handle left open from the opendir on the first line. What you should be using is glob '../Sc/direct/*.txt', but then there's no need for the readdir.
There are two ways to find the contents of a directory. You can use opendir and readdir to read everything in it, or you can use glob.
The first method returns only the bare name of each entry, which means you must concatenate each name with the path to the containing directory, preferably using catfile from File::Spec::Functions. It also includes the pseudo-directories . and .., so you must filter those out before you can use the list of names.
glob has neither of these disadvantages. All the strings it returns are real directory entries, and they will include a path if you provided one in the pattern you passed as a parameter.
You seem to have become rather muddled over the two, so I have written this program which differentiates between the two approaches. I hope it makes things clearer.
use strict;
use warnings;
use v5.10.1;
use autodie;

use File::Spec::Functions qw/ catfile /;

my $dir = '../Sc/direct';

### Using glob

for my $file ( glob catfile($dir, '*.txt') ) {
    print qq{Opening "$file"\n};
    open my $fh, '<', $file;
    # Do stuff with the data from $fh
}

### Using opendir / readdir

opendir my ($dh), $dir;
# readdir returns bare names, so test -f against the full path
my @files = grep { /\.txt$/i and -f catfile($dir, $_) } readdir $dh;
closedir $dh;

for my $file ( @files ) {
    $file = catfile($dir, $file);
    print qq{Opening "$file"\n};
    open my $fh, '<', $file;
    # Do stuff with the data from $fh
}
Using $dir in the glob is incorrect. $dir is a GLOB type, not a string value. Rather, you should be looping over the @files array and looking for names that match what you want. Maybe something like so:
foreach my $fp (@files) {
    if ($fp =~ /(.*)\.txt$/) {
        print "$fp is a .txt\n";
        open my $in, "<", $fp or die "Cannot open $fp: $!";
        while (<$in>) { ... }
    }
}

In Perl, how can I filter all log files in a directory, and extract interesting lines?

I'm trying to select only the .log files in my directory and then search in those files for the word "unbound" and print the entire line into a new output file with the same name as the log file (number###.log) but with a .txt extension. This is what I have so far:
#!/usr/bin/perl
use strict;
use warnings;

my $path = $ARGV[0];
my $outpath = $ARGV[1];
my @files;
my $files;

opendir(DIR, $path) or die "$!";
@files = grep { /\.log$/ } readdir(DIR);

my @out;
my $out;
opendir(OUT, $outpath) or die "$!";

my $line;
foreach $files (@files) {
    open (FILE, "$files");
    my @line = <FILE>;
    my $regex = Unbound;
    open (OUT, ">>$out");
    print grep { $line =~ /$regex/ } <>;
}
close OUT;
close FILE;
closedir(DIR);
closedir (OUT);
I'm a beginner, and I don't really know how to create a new text file with the acquired output.
A few things I'd suggest to improve this code:
Declare your loop iterators within the loop: foreach my $file ( @files ) {
Use 3-arg open: open( my $input_fh, "<", $filename );
Use glob rather than opendir-then-grep: foreach my $file ( <$path/*.log> ) {
grep is good for extracting things into arrays. Your grep reads the whole file just to print it, which isn't necessary. It doesn't matter much if the file is short, though.
perltidy is great for reformatting code.
You're opening 'OUT' on a directory path, which isn't going to work: opendir gives you a directory handle, and you can't print output to one. $outpath names a directory, so to write results you need to open individual files inside it.
Because you're using opendir, you're getting bare filenames, not full paths, so you may be in the wrong place to actually open the files. Prepending the path name or doing a chdir are possible solutions. That's one of the reasons I like glob: it returns a path as well.
So with that in mind - how about:
#!/usr/bin/perl
use strict;
use warnings;

use File::Basename;

# Extract paths
my $input_path = $ARGV[0];
my $output_path = $ARGV[1];

# Error if paths are invalid.
unless ( defined $input_path
    and -d $input_path
    and defined $output_path
    and -d $output_path )
{
    die "Usage: $0 <input_path> <output_path>\n";
}

foreach my $filename (<$input_path/*.log>) {

    # extract the 'name' bit of the filename.
    # be slightly careful with this - it's based on an
    # assumption which isn't always true, and File::Spec
    # is a more powerful way of accomplishing this - but
    # it should grab 'number####' from /path/to/file/number####.log
    my $output_file = basename( $filename, '.log' );

    # open input and output filehandles.
    open( my $input_fh, "<", $filename ) or die $!;
    open( my $output_fh, ">", "$output_path/$output_file.txt" ) or die $!;
    print "Processing $filename -> $output_path/$output_file.txt\n";

    # iterate input, extracting into $line
    while ( my $line = <$input_fh> ) {

        # check if $line matches your RE.
        if ( $line =~ m/Unbound/ ) {

            # write it to output.
            print {$output_fh} $line;
        }
    }

    # tidy up our filehandles. Although technically, they'll
    # close automatically because they leave scope.
    close($output_fh);
    close($input_fh);
}
Here is a script that takes advantage of Path::Tiny. Now, at this stage of your learning process, you are probably better off understanding @Sobrique's solution, but using modules such as Path::Tiny or Path::Class will make it easier to write these one-off scripts quickly and correctly.
Also, I didn't really test this script, so watch out for bugs.
#!/usr/bin/env perl

use strict;
use warnings;

use Path::Tiny;

run(\@ARGV);

sub run {
    my $argv = shift;
    unless (@$argv == 2) {
        die "Need source and destination paths\n";
    }

    my $it = path($argv->[0])->realpath->iterator({
        recurse => 0,
        follow_symlinks => 0,
    });

    my $outdir = path($argv->[1])->realpath;

    while (my $path = $it->()) {
        next unless -f $path;
        next unless $path =~ /[.]log\z/;

        my $logfh = $path->openr;
        my $outfile = $outdir->child($path->basename('.log') . '.txt');
        my $outfh;

        while (my $line = <$logfh>) {
            next unless $line =~ /Unbound/;
            unless ($outfh) {
                $outfh = $outfile->openw;
            }
            print $outfh $line;
        }

        # only close if we actually opened an output file
        if ($outfh) {
            close $outfh
                or die "Cannot close output '$outfile': $!";
        }
    }
}
Notes
realpath will croak if the path provided does not exist.
Similarly for openr and openw.
I am reading input files line-by-line to keep the memory footprint of the program independent of the sizes of input files.
I do not open the output file until I know I have a match to print to.
When matching a file extension using a regular expression pattern, keep in mind that \n is a valid character in Unix file names, and the $ anchor will match it.
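A quick demonstration of that last point (the trailing "\n" is artificial here, but it is a legal character in a Unix file name):
my $name = "sneaky.log\n";
print "\$ matched\n" if $name =~ /[.]log$/;   # matches: $ allows a trailing newline
print "\\z matched\n" if $name =~ /[.]log\z/; # no match: \z is the true end of string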

perl - loop through directory to find file.mdb and execute code if file.ldb not found

I am a beginner Perl programmer and I have come across a snag that I can't get past. I have been reading and re-reading web posts and Simon Cozens' book at perl.org all day, but can't seem to solve the problem.
My intention with the code below is to loop through the files in a directory and, when a file has a certain string in its name, verify that the same file name doesn't exist with a different extension; if it doesn't, print the file name (later I will implement deletion of the file, but for now I want to ensure it will work). Specifically, I am finding .mdb files and, after checking that there are no associated .ldb files, deleting the .mdb file.
Right now my code returns this:
RRED_Database_KHOVIS.ldb
RRED_Database_KHOVIS.mdb
I will kill RRED_Database_KHOVIS.mdb
RRED_Database_mkuttler.mdb
I will kill RRED_Database_mkuttler.mdb
RRED_Database_SBreslow.ldb
RRED_Database_SBreslow.mdb
I will kill RRED_Database_SBreslow.mdb
I want it to only print the "I will kill..." line for a .mdb file with no associated .ldb file.
My current code is below. I appreciate any help offered...
use strict;
use warnings;
use File::Find;
use diagnostics;

my $dir = "//vfg1msfs01ab/vfgcfs01\$/Regulatory Reporting/Access Database/";
my $filename = "RRED_Database";
my $fullname, my $ext;

opendir DH, $dir or die "Couldn't open the directory: $!";
while ($_ = readdir(DH)) {
    my $ext = ".mdb";
    if ((/$filename/) && ($_ ne $filename . $ext)) {
        print "$_ \n";
        unless (-e $dir . s/.mdb/.ldb/) {
            s/.ldb/.mdb/;
            print "I will kill $_ \n\n";
            #unlink $_ or print "oops, couldn't delete $_: $!\n";
        }
        s/.ldb/.mdb/;
    }
}
When looping through files, I like to use 'next' statements repeatedly to assure that I'm only looking at exactly what I want. Try this:
use strict;
use warnings;
use File::Find;
use diagnostics;

my $dir = "//vfg1msfs01ab/vfgcfs01\$/Regulatory Reporting/Access Database/";
my $filename = "RRED_Database";
my $fullname, my $ext;

opendir DH, $dir or die "Couldn't open the directory: $!";
while ($_ = readdir(DH)) {
    my $ext = ".mdb";

    # Jump to the next while() iteration unless the file begins
    # with $filename and ends with $ext,
    # and capture the basename in $1
    next unless $_ =~ m|($filename.*)\Q$ext\E$|;

    # Jump to the next while() iteration if the file basename.ldb is found
    # (readdir returns bare names, so test relative to $dir)
    next if -f $dir . $1 . ".ldb";

    # At this point, we have an mdb file with no matching ldb file
    print "$_ \n";
    print "I will kill $_ \n\n";
    #unlink $_ or print "oops, couldn't delete $_: $!\n";
}
While Stuart's answer made it more lean, I was also able to get it to work with the code below (I changed .mdb to .accdb because I am now dealing with a different file type):
use strict;
use warnings;
use File::Spec;
use diagnostics;

my $dir = "//vfg1msfs01ab/vfgcfs01\$/Regulatory Reporting/Access Database/";
my $filename = "RRED_Database";
my $ext;

opendir DH, $dir or die "Couldn't open the directory: $!";
while ($_ = readdir(DH)) {
    my $ext = ".accdb";
    if ((/$filename/) && ($_ ne $filename . $ext) && ($_ !~ /.laccdb/)) {
        # if file contains database name, is not the main database,
        # and is not a locked version of the db
        s/$ext/.laccdb/;
        unless (-e File::Spec->join($dir, $_)) {
            s/.laccdb/$ext/;
            #print "I will kill $_ \n\n";
            unlink $_ or print "oops, couldn't delete $_: $!\n";
        }
        s/.laccdb/$ext/;
    }
}

What is the most efficient way to open/act upon all of the files in a directory?

I need to perform my script (a search) on all the files of a directory. Here are the methods which work. I am just asking which is best. (I need file names of form: parsedchpt31_4.txt)
Glob:
my $parse_corpus; #(for all options)

##glob (only if all files in same directory as script?):
my @files = glob("parsed" . "*.txt");
foreach my $file (@files) {
    open($parse_corpus, '<', "$file") or die $!;
    ... all my code...
}
Readdir with while and conditions:
##readdir:
my $dir = '.';
opendir(DIR, $dir) or die $!;
while (my $file = readdir(DIR)) {
    next unless (-f "$dir/$file");             ##Ensure it's a file
    next unless ($file =~ m/^parsed.*\.txt/);  ##Ensure it's a parsed file
    open($parse_corpus, '<', "$file") or die "Couldn't open $file: $!";
    ... all my code...
}
Readdir with foreach and grep:
##readdir+grep:
my $dir = '.';
opendir(DIR, $dir) or die $!;
foreach my $file (grep { /^parsed.*\.txt/ } readdir(DIR)) {
    next unless (-f "$dir/$file");  ##Ensure it's a file
    open($parse_corpus, '<', "$file") or die "Couldn't open $file: $!";
    ... all my code...
}
File::Find:
##File::Find
my $dir = ".";  ##current directory: could be (include quotes): '/Users/jon/Desktop/...'
my @files;
find(\&open_file, $dir);  ##built-in function
sub open_file {
    push @files, $File::Find::name if (/^parsed.*\.txt/);
}
foreach my $file (@files) {
    open($parse_corpus, '<', "$file") or die $!;
    ...all my code...
}
Is there another way? Is it good to enclose my entire script in the loops? Is it okay that I don't use closedir? I'm passing this off to others, and I'm not sure where their files will be (so I may not be able to use glob).
Thanks a lot, hopefully this is the right place to ask this.
The best or most efficient approach depends on your purposes and the larger context. Do you mean best in terms of raw speed, simplicity of the code, or something else? I'm skeptical that memory considerations should drive this choice. How many files are in the directory?
For sheer practicality, the glob approach works fairly well. Before resorting to anything more involved, I'd ask whether there is a problem.
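On the worry that glob works "only if all files are in the same directory as the script": it isn't so. If the pattern carries a directory prefix (a hypothetical path here), every returned name carries it too, ready to pass straight to open:
my $dir = '/home/jon/corpus';   # hypothetical location of the files
my @files = glob("$dir/parsed*.txt");
# @files holds e.g. '/home/jon/corpus/parsedchpt31_4.txt'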
If you're able to use other modules, another approach is to let someone else worry about the grubby details:
use File::Util qw();
my $fu = File::Util->new;
my #files = $fu->list_dir($dir, qw(--with-paths --files-only));
Note that File::Find performs a recursive search descending into all subdirectories. Many times you don't want or need that.
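If you want File::Find anyway but only for the top level, one option (a sketch, not the only way) is to prune subdirectories as they are encountered:
use strict;
use warnings;
use File::Find;

my $dir = '.';
my @files;
find(sub {
    # refuse to descend into any subdirectory of the start point
    if (-d && $_ ne '.') {
        $File::Find::prune = 1;
        return;
    }
    push @files, $File::Find::name if /^parsed.*\.txt$/;
}, $dir);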
I would also add that I dislike your two readdir examples because they commingle different pieces of functionality: (1) getting file names, and (2) processing individual files. I would keep those jobs separate.
my $dir = '.';
opendir(my $dh, $dir) or die $!; # Use a lexical directory handle.

my @files =
    grep { -f }
    map  { "$dir/$_" }
    grep { /^parsed.*\.txt$/ }
    readdir($dh);

for my $file (@files) {
    ...
}
I think using a while loop is the safer answer. Why? Because loading all the file names into an array could mean large memory usage for a huge directory, and reading one entry at a time avoids that problem.
I prefer readdir to glob, but that's probably more a matter of taste.
If performance is an issue, one could say that the -f check is unnecessary for any file with the .txt extension.
I find that a recursive directory-walking function built on the perfect partners opendir/readdir and File::chdir (my favorite CPAN module, great for cross-platform work) lets one easily and clearly manipulate anything in a directory, including subdirectories if desired (if not, omit the recursion).
Example (a simple deep ls):
#!/usr/bin/env perl
use strict;
use warnings;

use File::chdir; # Provides special variable $CWD
# assigning to $CWD sets the working directory
# can be local to a block
# evaluates/stringifies to an absolute path
# other great features

walk_dir(shift);

sub do_something {
    print shift . "\n";
}

sub walk_dir {
    my $dir = shift;
    local $CWD = $dir;
    opendir my $dh, $CWD; # lexical dirhandle, so no closedir needed
    print "In: $CWD\n";
    while (my $entry = readdir $dh) {
        next if ($entry =~ /^\.+$/);
        # other exclusion tests

        if (-d $entry) {
            walk_dir($entry);
        } elsif (-f $entry) {
            do_something($entry);
        }
    }
}
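Run it as, say, perl walk.pl /some/directory (script name and path are placeholders); it prints each directory it enters and hands every regular file to do_something, recursing depth-first via the localized $CWD.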

How can I add a prefix to all filenames under a directory?

I am trying to prefix a string (reference_) to the names of all the *.bmp files in all the directories as well as sub-directories. The first time we run the silk script, it creates directories and subdirectories, and under each subdirectory it stores each mobile application's screenshot with a .bmp extension.
When I run the automated silk script for the second time, it will again create the *.bmp files in all the subdirectories. Before running the script for the second time, I want to prefix all the *.bmp files with the string reference_.
For example, first_screen.bmp becomes reference_first_screen.bmp.
I have the directory structure as below:
C:\Image_Repository\BG_Images\second
...
C:\Image_Repository\BG_Images\sixth
each having first_screen.bmp files, etc.
Could anyone help me out?
How can I prefix all the image file names with the string reference_?
When I run the script for the second time, the Perl script in silk will take both images from the sub-directory and compare them pixel by pixel. I am trying with the code below.
Could you please guide me on how I can complete this task?
#!/usr/bin/perl -w

&one;
&two;

sub one {
    use Cwd;
    my $dir = "C:\\Image_Repository";
    #print "$dir\n";
    opendir(DIR, "+<$dir") or "die $!\n";
    my @dir = readdir DIR;
    #$lines=@dir;
    delete $dir[-1];
    print "$lines\n";
    foreach my $item (@dir)
    {
        print "$item\n";
    }
    closedir DIR;
}

sub two {
    use Cwd;
    my $dir1 = "C:\\Image_Repository\\BG_Images";
    #print "$dir1\n";
    opendir(D, "+<$dir1") or "die $!\n";
    my @dire = readdir D;
    #$lines=@dire;
    delete $dire[-1];
    #print "$lines\n";
    foreach my $item (@dire)
    {
        #print "$item\n";
        $dir2 = "C:\\Image_Repository\\BG_Images\\$item";
        print $dir2;
        opendir(D1, "+<$dir2") or die " $!\n";
        my @files = readdir D1;
        #print "@files\n";
        foreach $one (@files)
        {
            $one = "reference_" . $one;
            print "$one\n";
            #rename $one,Reference_.$one;
        }
    }
    closedir DIR;
}
I tried the opendir call with '+<' mode, but I am getting a compilation error for the read-write mode.
When I run this code, it shows the files in the BG_Images folder with the prefixed string, but it doesn't actually rename the files in the sub-directories.
You don't open a directory for writing. Just use opendir without the mode parts of the string:
opendir my($dir), $dirname or die "Could not open $dirname: $!";
However, you don't need that. You can use File::Find to make the list of files you need.
#!/usr/bin/perl
use strict;
use warnings;

use File::Basename;
use File::Find;
use File::Find::Closures qw(find_regular_files);
use File::Spec::Functions qw(catfile);

my( $wanted, $reporter ) = find_regular_files;
find( $wanted, $ARGV[0] );

my $prefix = 'reference_';

foreach my $file ( $reporter->() )
{
    my $basename = basename( $file );

    if( index( $basename, $prefix ) == 0 )
    {
        print STDERR "$file already has '$prefix'! Skipping.\n";
        next;
    }

    my $new_path = catfile(
        dirname( $file ),
        "$prefix$basename",
    );

    unless( rename $file, $new_path )
    {
        print STDERR "Could not rename $file: $!\n";
        next;
    }

    print $file, "\n";
}
You should probably check out the File::Find module for this - it will make recursing up and down the directory tree simpler.
You should probably be scanning the file names and modifying those that don't start with reference_ so that they do. That may require splitting the file name up into a directory name and a file name and then prefixing the file name part with reference_. That's done with the File::Basename module.
At some point, you need to decide what happens when you run the script the third time. Do the files that already start with reference_ get overwritten, or do the unprefixed files get overwritten, or what?
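For the rename itself, here is a hedged sketch of that split-and-prefix step as it might run on Windows (the sample path comes from the question; fileparse is from the core File::Basename module and catfile from File::Spec::Functions):
use File::Basename;
use File::Spec::Functions qw(catfile);

my $old = 'C:\Image_Repository\BG_Images\second\first_screen.bmp';
my ($name, $dir) = fileparse($old);
unless ($name =~ /^reference_/) {   # skip names already carrying the prefix
    my $new = catfile($dir, "reference_$name");
    rename $old, $new or warn "Could not rename $old: $!\n";
}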
The reason the files are not being renamed is that the rename operation is commented out. Remember to add use strict; at the top of your script (as well as the -w option which you did use).
If you get a list of files in an array @files (and the names are base names, so you don't have to fiddle with File::Basename), then the loop might look like:
foreach my $one (@files)
{
    my $new = "reference_$one";
    print "$one --> $new\n";
    rename $one, $new or die "failed to rename $one to $new ($!)";
}
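(Note that rename here, like readdir's return values, works with names relative to the current directory, so the script must be run from, or chdir to, the directory that holds the files.)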
With the aid of the find utility from GNU findutils for Windows:
$ find -iname "*.bmp" | perl -wlne"chomp; ($prefix, $basename) = split(m~\/([^/]+)$~, $_); rename($_, join(q(/), ($prefix, q(reference_).$basename))) or warn qq(failed to rename '$_': $!)"