Comparing two directories using Perl

I am new to Perl, so excuse my noobness.
Here's what I intend to do.
$ perl dirComp.pl dir1 dir2
dir1 & dir2 are directory names.
The script dirComp.pl should identify whether contents in dir1 & dir2 are identical or not.
I have come up with an algorithm:
Store all the contents of dir1 (recursively) in a list
Store all the contents of dir2 in another list
Compare the two lists; if they are the same, dir1 & dir2 are identical, else they are not.
my @files1 = readdir(DIR1h);
my @files2 = readdir(DIR2h);

# Remove filename extensions for each list.
foreach my $item (@files1) {
    my ( $fileName, $filePath, $fileExt ) = fileparse($item, qr/\.[^.]*/);
    $item = $fileName;
}
foreach my $item (@files2) {
    my ( $fileName, $filePath, $fileExt ) = fileparse($item, qr/\.[^.]*/);
    $item = $fileName;
}
I am not able to recursively traverse subdirectories in a given directory with the above code. Any help would be appreciated.
EDIT: Using File::DirCompare
#!/usr/bin/perl -w
use File::DirCompare;
use File::Basename;

if ($#ARGV < 1) {
    &usage;
}

my $dir1 = $ARGV[0];
my $dir2 = $ARGV[1];

File::DirCompare->compare($dir1, $dir2, sub {
    my ($a, $b) = @_;
    if ( !$b ) {
        printf "Test result:PASSED.\n";
        printf "Only in %s : %s\n", dirname($a), basename($a);
    } elsif ( !$a ) {
        printf "Test result:PASSED.\n";
        printf "Only in %s : %s\n", dirname($b), basename($b);
    } else {
        printf "Test result:FAILED.\n";
        printf "Files $a and $b are different.\n";
    }
});
I have a directory structure as below,
dir1/ dir2/
--file1.txt --file1.txt
--file2.txt --file2.txt
--file3.cpp --file3.cpp
I am getting Test result:FAILED, when the result should have been PASSED. Can anyone please correct me?
Thanks

The example you supplied using File::DirCompare works as intended.
Keep in mind that the callback subroutine is called for every unique file in each directory and for every pair of files which differ in their content. Having the same filename is not enough; the contents of each file in each directory must be exactly the same as well.
Furthermore, the cases in which you report "PASSED" aren't a success at all (by your definition) since they detail the cases in which a file exists in one of the directories, but not the other: meaning the directories' contents are not identical.
This should be closer to what you want:
#!/usr/bin/perl
use strict;
use warnings;
use File::DirCompare;
use File::Basename;

sub compare_dirs
{
    my ($dir1, $dir2) = @_;
    my $equal = 1;
    File::DirCompare->compare($dir1, $dir2, sub {
        my ($a, $b) = @_;
        $equal = 0;  # if the callback was called even once, the dirs are not equal
        if ( !$b ) {
            printf "File '%s' only exists in dir '%s'.\n", basename($a), dirname($a);
        }
        elsif ( !$a ) {
            printf "File '%s' only exists in dir '%s'.\n", basename($b), dirname($b);
        }
        else {
            printf "File contents for $a and $b are different.\n";
        }
    });
    return $equal;
}

print "Please specify two directory names\n" and exit if (@ARGV < 2);
printf "%s\n", &compare_dirs($ARGV[0], $ARGV[1]) ? 'Test: PASSED' : 'Test: FAILED';

I'd recommend using the File::DirCompare module instead. It takes care of all the hard work of traversing the directory structure; you just need to define how your directories should be checked (should the sub compare the file contents, etc.).
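If memory serves, compare() also accepts a fourth argument: an options hashref whose cmp key overrides the default File::Compare-based content check. Treat the exact option name as an assumption here and confirm it against the File::DirCompare POD. A sketch that compares directories by file names only:

use File::DirCompare;

my ($dir1, $dir2) = @ARGV;

# With a cmp sub that always returns 0 ("equal"), the callback fires only
# for files that exist on one side, never because file contents differ.
File::DirCompare->compare($dir1, $dir2, sub {
    my ($a, $b) = @_;
    printf "Only on one side: %s\n", defined $a ? $a : $b;
}, { cmp => sub { 0 } });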

You might want to try the ol' File::Find. It's not my favorite module (it is just funky in the way it works), but for your purposes it allows you to easily find all files in two directories and compare them. Here's a brief example:
use strict;
use warnings;
use feature qw(say);
use Digest::MD5::File qw(file_md5_hex);
use File::Find;
use constant {
DIR_1 => "/usr/foo",
DIR_2 => "/usr/bar",
};
my %dir_1;
my %dir_2;
find( sub {
    if ( -f $File::Find::name ) {
        $dir_1{$File::Find::name} = file_md5_hex($File::Find::name);
    }
    else {
        $dir_1{$File::Find::name} = "DIRECTORY!";
    }
}, DIR_1);

find( sub {
    if ( -f $File::Find::name ) {
        $dir_2{$File::Find::name} = file_md5_hex($File::Find::name);
    }
    else {
        $dir_2{$File::Find::name} = "DIRECTORY!";
    }
}, DIR_2);
This will create two hashes keyed by the file names in each directory. I used Digest::MD5::File to create an MD5 checksum. If the checksums of two files differ, I know the files differ (although I don't know where).
Now you have to do three things:
Go through %dir_1 and see if there's an equivalent key in %dir_2. If there is not an equivalent key, you know that a file exists in %dir_1 and not %dir_2.
If there is an equivalent key in each hash, check to see if the MD5 checksums agree. If they do, the files match. If they don't, they differ; you can't say where they differ, but they differ.
Finally, go through %dir_2 and check to see if there's an equivalent key in %dir_1. If there is, do nothing. If there isn't, that means there's a file in %dir_2 that's not in %dir_1.
Just a word of warning: the keys in these two hashes won't match. You'll have to transform one to the other when doing your compare. For example, you'll have two files as:
/usr/bar/my/file/is/here.txt
/usr/foo/my/file/is/here.txt
As you can see, my/file/is/here.txt exists in both directories, but in my code, the two hashes will have two different keys. You could either fix the two subroutines to strip the directory name off the front of the file paths, or transform one key to the other when you do your comparison. I didn't run through a full test (the bit of code I wrote works in my testing), so I'm not 100% sure what you'll have to do to make sure you find the matching keys.
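To make that concrete, here's a hedged sketch of the comparison step, reusing the %dir_1/%dir_2 hashes and DIR_1/DIR_2 constants from the example above and stripping the directory prefixes so the keys line up:

# Rebuild both hashes with the top-level directory stripped from each key,
# so the same relative path produces the same key on both sides.
my (%rel_1, %rel_2);
$rel_1{ substr($_, length(DIR_1)) } = $dir_1{$_} for keys %dir_1;
$rel_2{ substr($_, length(DIR_2)) } = $dir_2{$_} for keys %dir_2;

# Steps 1 and 2: walk %rel_1, flagging missing files and differing checksums.
for my $file (sort keys %rel_1) {
    if    (not exists $rel_2{$file})       { say "Only in ", DIR_1, ": $file"; }
    elsif ($rel_1{$file} ne $rel_2{$file}) { say "Files differ: $file"; }
}

# Step 3: anything in %rel_2 that %rel_1 lacks.
for my $file (sort keys %rel_2) {
    say "Only in ", DIR_2, ": $file" unless exists $rel_1{$file};
}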
Oh, another warning: I pick up all entries and not just files. For directories, I can check to see if the hash key is equal to DIRECTORY! or not. I could simply ignore everything that's not a file.
And, you might want to check for special cases. Is this a link? Is it a hard link or a soft link? What about some sort of special file. That makes things a bit more complex. However, the basics are here.

Related

Perl Script did not find the newest file

I use the following Perl script from "https://exchange.nagios.org/directory/Plugins/Operating-Systems/Linux/Check-Newest-files-age-and-size-in-Diredtory/"
But there is an error in this script: it does not show the newest file. Can someone find the mistake? In the comments on that site, somebody wrote that the mistake is in line 22, but I can't find it.
Here is the code:
# Check that file exists (can be directory or link)
unless (-d $opt_f) {
    print "FILE_AGE CRITICAL: Folder not found - $opt_f\n";
    exit $ERRORS{'CRITICAL'};
}

my $list = opendir DIRHANDLE, $opt_f or die "Cant open directory: $!";
while ($_ = readdir(DIRHANDLE))
{
    $file = sprintf("%s/%s", $opt_f, $_);
    $attrs = stat("$file");
    $diff = time() - $attrs->mtime;
    if ($temp == 0)
    {
        #$temp=$diff;
        $new = $file;
    }
    if ($_ ne "." && $_ ne "..")
    {
        if ($diff < $temp)
        {
            $temp = $diff;
            $new = $_;
        }
        else
        {
            $temp = $diff;
            $new = $_;
        }
    }
}
$st = File::stat::stat($opt_f . "/" . $new);
$age = time - $st->mtime;
$size = $st->size;
Example:
I have some files on a filer (backups in a .img file). I use this script to check the newest file's size. If I create a new folder with a new file, the check looks at the correct file. But if I create a second file, the check still looks at the old file every time. If I create a third file, the check goes to the correct file. The fourth file is wrong and the fifth file is correct again (and so on).
An easy (easier?) way to do this would be to use the built-in glob function to read the directory instead of opening it, and then use simple file tests to sort the files by creation or modification time:
my @files = sort { -M $a <=> -M $b } glob "*";  # or -C (inode change time; file creation on some systems)
# $files[0] is the newest file
A list of file test operators is at
https://users.cs.cf.ac.uk/Dave.Marshall/PERL/node69.html
Note that -C and -M relate to when the script started, so for long-running or daemon scripts you might need to do something a bit different.
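Since -M and -C are measured from $^T (the time the script started), a long-running script can re-base them by assigning to $^T before the test; a minimal sketch, assuming the same glob approach as above:

$^T = time;   # re-base -M/-A/-C age calculations on "now" instead of script start
my @files = sort { -M $a <=> -M $b } glob "*";
# $files[0] is now the most recently modified file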
You want to find the latest mtime, so we're talking about a simple comparison of the previously-found latest mtime with the mtime of the current file. But there's so much code beyond that in what you posted ...and the first thing you do with the value you want to compare is change it? What?
Let's start over.
use File::stat;   # so stat() returns an object with an ->mtime method

my $latest_mtime = -1;
my $latest_qfn;
while (defined( my $fn = readdir($dh) )) {   # $dh: handle from opendir() on $dir_qfn
    next if $fn =~ /^\.\.?\z/;
    my $qfn = "$dir_qfn/$fn";
    my $stats = stat($qfn)
        or warn("Can't stat \"$qfn\": $!\n"), next;
    my $mtime = $stats->mtime;
    if ($mtime > $latest_mtime) {
        $latest_mtime = $mtime;
        $latest_qfn = $qfn;
    }
}
if (defined($latest_qfn)) {
    say $latest_qfn;
}
The biggest issue with the script seems to be that line 12 calls the core version of stat but line 13 expects the output to be that of File::stat::stat(). I suspect that testing for '.' or '..' should be done at the top of the while loop and all the variables should be defined before they are used.
As Jeremy has said, you're better off sorting an array of the files and shifting/popping the first/last value, depending on what you're looking for.

How to normalize a path in Perl? (without checking the filesystem)

I want the Perl's equivalent of Python's os.path.normpath():
Normalize a pathname by collapsing redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B. This string manipulation may change the meaning of a path that contains symbolic links. […]
For instance, I want to convert '/a/../b/./c//d' into /b/c/d.
The path I'm manipulating does NOT represent a real directory in the local file tree. There are no symlinks involved. So a plain string manipulation works fine.
I tried Cwd::abs_path and File::Spec, but they don't do what I want.
my $path = '/a/../b/./c//d';
File::Spec->canonpath($path);
File::Spec->rel2abs($path, '/');
# Both return '/a/../b/c/d'.
# They don't remove '..' because it might change
# the meaning of the path in case of symlinks.
Cwd::abs_path($path);
# Returns undef.
# This checks for the path in the filesystem, which I don't want.
Cwd::fast_abs_path($path);
# Gives an error: No such file or directory
Possibly related link:
Normalized directory paths - perlmonks: people discuss several approaches.
Given that File::Spec is almost what I needed, I ended up writing a function that removes ../ from the result of File::Spec->canonpath(). The full code including tests is available as a GitHub Gist.
use File::Spec;

sub path_normalize_by_string_manipulation {
    my $path = shift;

    # canonpath does string manipulation, but does not remove "..".
    my $ret = File::Spec->canonpath($path);

    # Let's remove ".." by using a regex.
    while ($ret =~ s{
        (^|/)             # Either the beginning of the string, or a slash, save as $1
        (                 # Followed by one of these:
            [^/]     |    #   * Any one character (except slash, obviously)
            [^./][^/]|    #   * Two characters where
            [^/][^./]|    #     they are not ".."
            [^/][^/][^/]+ #   * Three or more characters
        )                 # Followed by:
        /\.\./            # "/", followed by "../"
    }{$1}x
    ) {
        # Repeat this substitution until not possible anymore.
    }

    # Re-adding the trailing slash, if needed.
    if ($path =~ m!/$! && $ret !~ m!/$!) {
        $ret .= '/';
    }
    return $ret;
}
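For example (traced by hand against the regex above, so treat the expected results as an illustration):

print path_normalize_by_string_manipulation('/a/../b/./c//d'), "\n";   # prints /b/c/d
print path_normalize_by_string_manipulation('a/b/../../c/'), "\n";     # prints c/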
My use case was normalizing include paths inside files relative to another path. For example, I might have a file at '/home/me/dita-ot/plugins/org.oasis-open.dita.v1_3/rng/technicalContent/rng/concept.rng' that includes the following file relative to itself:
<include href="../../base/rng/topicMod.rng"/>
and I needed the absolute path of that included file. (The including file path might be absolute or relative.)
Path::Tiny was promising, but I can only use core modules.
I tried using chdir to the include file location then using File::Spec->rel2abs() to resolve the path, but that was painfully slow on my system.
I ended up writing a subroutine to implement a simple string-based method of evaporating '../' components:
#!/usr/bin/perl
use strict;
use warnings;
use Cwd;
use File::Basename;
use File::Spec;

sub adjust_local_path {
    my ($file, $relative_to) = @_;

    # handle the fast cases
    return Cwd::realpath($file) if (($relative_to eq '.') || ($file =~ m!^\/!));

    $relative_to = dirname($relative_to) if (-f $relative_to);
    $relative_to = Cwd::realpath($relative_to);
    while ($file =~ s!^\.\./!!) { $relative_to =~ s!/[^/]+$!!; }
    return File::Spec->catdir($relative_to, $file);
}
my $source_file = '/home/chrispy/dita-ot/plugins/org.oasis-open.dita.v1_3/rng/technicalContent/rng/topic.rng';
my $included_file = '.././base/rng/topicMod.rng';
print adjust_local_path($included_file, $source_file)."\n";
The result of the script above is
$ ./test.pl
/home/me/dita-ot-3.1.3/plugins/org.oasis-open.dita.v1_3/rng/technicalContent/base/rng/topicMod.rng
Using realpath() had the nice side-effect of resolving symlinks, which I needed. In the example above, dita-ot/ is a link to dita-ot-3.1.3/.
You can provide either a file or a path as the second argument; if it's a file, the directory path of that file is used. (This was convenient for my own purposes.)
Fixing Tom van der Woerdt's code:
foreach my $path ("/a/b/c/d/../../../e", "/a/../b/./c//d") {
    my @c = reverse split m#/#, $path;
    my @c_new;
    while (@c) {
        my $component = shift @c;
        next unless length($component);
        if ($component eq ".") { next; }
        if ($component eq "..") {
            my $i = 0;
            while ($c[$i] =~ m/^\.{0,2}$/) {
                $i++;
            }
            splice(@c, $i, 1);
            next;
        }
        push @c_new, $component;
    }
    print "/" . join("/", reverse @c_new) . "\n";
}
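Traced by hand (not machine-verified), this loop should print /a/e for the first test path and /b/c/d for the second.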
Removing '.' and '..' from paths is pretty straightforward if you process the path right-to-left:
my $path = "/a/../b/./c//d";
my @c = reverse split m#/#, $path;
my @c_new;
while (@c) {
    my $component = shift @c;
    next unless length($component);
    if ($component eq ".") { next; }
    if ($component eq "..") { shift @c; next }
    push @c_new, $component;
}
say "/" . join("/", reverse @c_new);
(Assumes the path starts with a /)
Note that this violates the UNIX pathname resolution standards, specifically this part :
A pathname that begins with two successive slashes may be interpreted in an implementation-defined manner, although more than two leading slashes shall be treated as a single slash.
The Path::Tiny module does exactly this:
use strict;
use warnings;
use 5.010;
use Path::Tiny;
say path('/a/../b/./c//d');
Output:
/b/c/d

How to find common parts in paths with Perl?

Having several paths, like:
1: /abc/def/some/common/part/xyz/file1.ext
2: /other/path/to/7433/qwe/some/common/part/anotherfile.ext
3: /misc/path/7433/qwe/some/common/part/filexx.ext
4: /2443/totally/different/path/file9988.ext
5: /abc/another/same/path/to/ppp/thisfile.ext
6: /deep1/deep2/another/same/path/to/diffone/filename.ext
I need to find the common parts - every possible one; e.g. in the above, the common parts that can be found are:
/some/common/part/ - in the paths 1,2,3
/another/same/path/to/ - in the 5,6
/path/to/ - in the 2,5,6
/path/ - 2,3,4,5,6
etc..
I simply have absolutely no idea how to solve this - which approach is a good one:
string based - somehow find common parts of the strings
list based - split all paths into lists and somehow compare the arrays for common elements
tree/graph based - somehow find common parts of a graph
other?
Once I get some direction on how to approach this problem, I'm (probably) able to code it myself - so I don't want a free programming service - but I need some guidance on how to start.
I'm sure there is already some CPAN module that could help me, but I really have no idea how to find the right useful module for the above problem among the 30k modules. :(
EDIT - what I need this for:
Having approx. 200k files, in 10k directories and many of them "belong together", like:
/u/some/path/project1/subprojct/file1
/u/backup/of/work/date/project1/subproject/file2
/u/backup_of_backup/of/work/date/project1/subproject/file2
/u/new/addtions/to/projec1/subproject/file3
The files are of different kinds (pdf, images, doc, txt and so on); several are identical (like file2 above - easy to filter with Digest::MD5), but the only way to "group them together" is based on "common parts" of a path - e.g. "project1/subproject" and so on.
Other files HAVE the same MD5, so I can filter out duplicates, but they are in different trees, like
/u/path/some/file
/u/path/lastest_project/menu/file
/u/path/jquery/menu/file
/u/path/example/solution/jquery/menu/file
so the files are the same (identical MD5), but I need to somehow move one copy to the right place (and delete the others), and I need to somehow determine the "most used" common paths and collect tags... (old path elements are tags)
The idea behind this is:
if the same-MD5 files are mostly stored under some common path - I can make a decision where to move one copy...
It is more complicated than that, but the above is enough for explanation ;)
I simply need to lower the entropy on my HDD ;)
There is some discussion about finding the longest common consecutive substrings in this thread: http://www.nntp.perl.org/group/perl.fwp/2002/02/msg1662.html
The "winner" appears to be the following code, but there are a few other things in there you could try:
#!/usr/bin/perl
use strict;
use warnings;

sub lcs {
    my $this = shift;
    my $that = shift;
    my $str = join "\0", $this, $that;
    my $len = 1;
    my $lcs;
    while ($str =~ m{ ([^\0]{$len,}) (?= [^\0]* \0 [^\0]*? \1 ) }xg) {
        $lcs = $1;
        $len = 1 + length($1);
    }
    if ($len == 1) { print("No common substring\n"); }
    else {
        printf("Longest common substring of length %d: \"%s\"\n", length($lcs), $lcs);
    }
}
Keep in mind you would have to adjust it a little bit to account for the fact that you only want entire subdirectories that match... i.e., change if ($len == 1) to something like if ($len == 1 or $lcs !~ /^\// or $lcs !~ /\/$/)
You would also have to add some bookkeeping to keep track of which ones match. When I ran this code on your examples above it also found the /abc/ match in lines 1 & 5.
One thing that may or may not be a problem is that the following two lines:
/abc/another/same/path/to/ppp/thisfile.ext
/abc/another/different/path/to/ppp/otherfile.ext
Would match on:
/abc/another/
But not on:
/path/to/ppp/
But -- here's the bad news -- you will have to do O(n^2) comparisons with n=200,000 files. That could take an obscene amount of time.
Another solution would be to go through each path in your list, add all of its possible directory paths as keys to a hash and push the file itself to the hash (so that the value is an array of files that have this path in it). Something like this:
use strict;
use warnings;

my %links;
open my $fh, "<", 'filename' or die "Can't open $!";
while (my $line = <$fh>) {
    chomp($line);
    my @dirs = split /\//, $line;
    for my $i (0..$#dirs) {
        if ($i == $#dirs) {
            push(@{ $links{$dirs[$i]} }, $line);
        }
        for my $j ($i+1..$#dirs) {
            push(@{ $links{join("/", @dirs[$i..$j])} }, $line);
            # PROCESS THIS if length of array is > 1
        }
    }
}
Of course, this would take an obscene amount of memory. With 200,000 files to process, you might have a hard time no matter what you try, but maybe you can break it up into more manageable chunks. Hopefully, this will give you a starting point.
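As a sketch of the "PROCESS THIS" step (my addition, assuming the %links hash built above): after the loop, report every partial path shared by more than one file, most-shared first:

# Report partial paths that occur in more than one file, most common first.
for my $part (sort { @{$links{$b}} <=> @{$links{$a}} } keys %links) {
    my @shared = @{ $links{$part} };
    next unless @shared > 1;
    print scalar(@shared), " files share '$part':\n";
    print "  $_\n" for @shared;
}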
To solve this problem, you need the correct data structure. A hash that counts the partial paths works well:
use File::Spec;

my %Count_of = ();

while( <DATA> ){
    my @names = File::Spec->splitdir( $_ );

    # remove file
    pop @names;

    # if absolute path, remove empty names at start
    shift @names while length( $names[0] ) == 0;

    # don't count blank lines
    next unless @names;

    # move two cursors through the names,
    # and count the partial paths
    # created from one to the other
    for my $i ( 0 .. $#names ){
        for my $j ( $i .. $#names ){
            my $partial_path = File::Spec->catdir( @names[ $i .. $j ] );
            $Count_of{ $partial_path } ++;
        }
    }
}

# now display the results
for my $path ( sort { $Count_of{$b} <=> $Count_of{$a} || $a cmp $b } keys %Count_of ){
    # skip if singleton
    next if $Count_of{ $path } <= 1;
    printf "%3d : %s\n", $Count_of{ $path }, $path;
}
__DATA__
/abc/def/some/common/part/xyz/file1.ext
/other/path/to/7433/qwe/some/common/part/anotherfile.ext
/misc/path/7433/qwe/some/common/part/filexx.ext
/2443/totally/different/path/file9988.ext
/abc/another/same/path/to/ppp/thisfile.ext
/deep1/deep2/another/same/path/to/diffone/filename.ext

How to make recursive calls using Perl, awk or sed?

If a .cpp or .h file has #includes (e.g. #include "ready.h"), I need to make a text file that has these filenames on it. Since ready.h may have its own #includes, the calls have to be made recursively. Not sure how to do this.
The solution of @OneSolitaryNoob will likely work alright, but has an issue: for each recursion, it starts another process, which is quite wasteful. We can use subroutines to do that more efficiently. Assuming that all header files are in the working directory:
sub collect_recursive_includes {
    # Unpack the parameters of the subroutine
    my ($filename, $seen) = @_;

    # Open the file to a lexically scoped filehandle.
    # In your script, you'll probably have to transform $filename to the correct path
    open my $fh, "<", $filename or do {
        # On failure: print a warning, and return, i.e. go on with the next include
        warn "Can't open $filename: $!";
        return;
    };

    # Loop through each line, recursing as needed
    LINE: while (<$fh>) {
        if (/^\s*#include\s+"([^"]+)"/) {
            my $include = $1;
            # you should probably normalize $include before testing if you've seen it
            next LINE if $seen->{$include};  # skip seen includes
            $seen->{$include} = 1;
            collect_recursive_includes($include, $seen);
        }
    }
}
This subroutine remembers what files it has already seen, and avoids recursing there again—each file is visited one time only.
At the top level, you need to provide a hashref as second argument, that will hold all filenames as keys after the sub has run:
my %seen = ( $start_filename => 1 );
collect_recursive_includes($start_filename, \%seen);
my @files = sort keys %seen;
# output @files, e.g.: print "$_\n" for @files;
I hinted in the code comments that you'll probably have to normalize the filenames. E.g. consider a starting filename ./foo/bar/baz.h, which points to qux.h. Then the actual filename we want to recurse to is ./foo/bar/qux.h, not ./qux.h. The Cwd module can help you find your current location, and to transform relative to absolute paths. The File::Spec module is a lot more complex, but has good support for platform-independent filename and path manipulation.
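A minimal sketch of that normalization, using dirname from File::Basename and File::Spec->rel2abs (the helper name is mine, not part of the answer above):

use File::Basename qw(dirname);
use File::Spec;

# Resolve an #include'd name relative to the file that mentioned it, so
# "qux.h" seen inside ./foo/bar/baz.h becomes an absolute path to ./foo/bar/qux.h.
sub resolve_include {
    my ($include, $including_file) = @_;
    return File::Spec->rel2abs($include, dirname($including_file));
}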
In Perl, recursion is straightforward:
sub factorial
{
    my $n = shift;
    if ($n <= 1)
        { return 1; }
    else
        { return $n * factorial($n - 1); }
}
print factorial(7);  # prints 5040, i.e. 7 * 6 * 5 * 4 * 3 * 2 * 1
Offhand, I can think of only two things that require care:
In Perl, variables are global by default, and therefore static by default. Since you don't want one function-call's variables to trample another's, you need to be sure to localize your variables, e.g. by using my.
There are some limitations with prototypes and recursion. If you want to use prototypes (e.g. sub factorial($) instead of just sub factorial), then you need to provide the prototype before the function definition, so that it can be used within the function body. (Alternatively, you can use & when you call the function recursively; that will prevent the prototype from being applied.)
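A small illustration of that second point, assuming you want prototypes at all (they are optional):

sub factorial($);           # predeclare the prototype so the recursive call can see it

sub factorial($) {
    my $n = shift;
    return $n <= 1 ? 1 : $n * factorial($n - 1);
    # alternatively, &factorial($n - 1) would bypass the prototype check entirely
}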
Not totally clear what you want the display to look like, but the basics would be a script called follow_includes.pl:
#!/usr/bin/perl -w
while (<>) {
    if (/\#include "(\S+)"/) {
        print STDOUT $1 . "\n";
        system("./follow_includes.pl $1");
    }
}
Run it like:
% follow_includes.pl somefile.cpp
And if you want to hide any duplicate includes, run it like:
% follow_includes.pl somefile.cpp | sort -u
Usually you'd want this in some sort of tree-print.
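For instance, here's a self-contained sketch of such a tree print; it folds the duplicate check into the recursion instead of piping through sort -u (the names are my own):

#!/usr/bin/perl
use strict;
use warnings;

# Print each include as an indented tree, one level deeper per #include.
# %seen prevents infinite loops on circular includes.
my %seen;

sub print_include_tree {
    my ($file, $depth) = @_;
    print "  " x $depth, $file, "\n";
    open my $fh, "<", $file or return;  # silently skip files we can't open
    while (<$fh>) {
        print_include_tree($1, $depth + 1)
            if /^\s*#include\s+"([^"]+)"/ and not $seen{$1}++;
    }
}

print_include_tree($ARGV[0], 0);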

How to find a file which exists in different directories under a given path in Perl

I'm looking for a method to find a file which resides in several directories in a given path. In other words, those directories all contain files with the same filename. My script seems to have a hierarchy problem: it looks into the wrong level of the path to grep the filenames for processing. I have a fixed path as input, and the script needs to look into the path and find files from there, but my script gets stuck two tiers up and processes from there, rather than descending into the last directories in the tier (in my case it processes at "ln" and "nn" and starts the subroutine from there).
The fixed input path is:
/nfs/disks/version_2.0/
The files that I want to post-process with a subroutine exist under several directories, as below. Basically I want to check whether file1.abc exists in each of the directories temp1, temp2 & temp3 under the ln directory, and likewise whether file2.abc exists in temp1, temp2, temp3 under the nn directory.
The files that I want to check, with full paths, are like this:
/nfs/disks/version_2.0/dir_a/ln/temp1/file1.abc
/nfs/disks/version_2.0/dir_a/ln/temp2/file1.abc
/nfs/disks/version_2.0/dir_a/ln/temp3/file1.abc
/nfs/disks/version_2.0/dir_a/nn/temp1/file2.abc
/nfs/disks/version_2.0/dir_a/nn/temp2/file2.abc
/nfs/disks/version_2.0/dir_a/nn/temp3/file2.abc
My script is as below:
#!/usr/bin/perl -w
my $dir = '/nfs/fm/disks/version_2.0/';

opendir(TEMP, $dir) || die $!;
foreach my $file (readdir(TEMP)) {
    next if ($file eq "." || $file eq "..");
    if (-d "$dir/$file") {
        my $d = "$dir/$file";
        print "Directory:- $d\n";
        &getFile($d);
        &compare($file);
    }
}
Note that I put the print "Directory:- $d\n"; there for debug purposes, and it printed this:
/nfs/disks/version_2.0/dir_a/
/nfs/disks/version_2.0/dir_b/
So I knew it went into the wrong path and processed the following subroutine from there.
Can somebody help point out where the error in my script is? Thanks!
To be clear: the script is supposed to recurse through a directory and look for files with a particular filename? In this case, I think the following code is the problem:
if (-d "$dir/$file") {
    my $d = "$dir/$file";
    print "Directory:- $d\n";
    &getFile($d);
    &compare($file);
}
I'm assuming the &getFile($d) is meant to step into a directory (i.e., the recursive step). This is fine. However, it looks like the &compare($file) is the action that you want to take when the object that you're looking at isn't a directory. Therefore, that code block should look something like this:
if (-d "$dir/$file") {
    &getFile("$dir/$file");   # the recursive step, for directories inside of this one
} elsif ( -f "$dir/$file" ) {
    &compare("$dir/$file");   # the action on files inside of the current directory
}
The general pseudo-code should look like this:
sub myFind {
    my $dir = shift;
    opendir my $dh, $dir or die "Can't open $dir: $!";
    foreach my $file ( readdir $dh ) {
        next if $file eq "." || $file eq "..";
        my $obj = "$dir/$file";
        if ( -d $obj ) {
            myFind( $obj );
        } elsif ( -f $obj ) {
            doSomethingWithFile( $obj );
        }
    }
}
myFind( "/nfs/fm/disks/version_2.0" );
As a side note: this script is reinventing the wheel. You only need to write a script that does the processing on an individual file. You could do the rest entirely from the shell:
find /nfs/fm/disks/version_2.0 -type f -name "the-filename-you-want" -exec your_script.pl {} \;
Wow, it's like reliving the 1990s! Perl code has evolved somewhat, and you really need to learn the new stuff. It looks like you learned Perl in version 3.0 or 4.0. Here's some pointers:
Use use warnings; instead of -w on the command line.
Use use strict;. This will require you to predeclare variables using my which will scope them to the local block or the file if they're not in a local block. This helps catch a lot of errors.
Don't put & in front of subroutine names.
Use and, or, and not instead of &&, ||, and !.
Learn about Perl Modules which can save you a lot of time and effort.
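For example, a modernized preamble in that style might look like this (a sketch, not your script):

#!/usr/bin/env perl
use strict;      # requires declaring variables with `my`
use warnings;    # replaces the -w command-line switch

sub greet {      # call as greet(), not &greet()
    my ($name) = @_;
    print "Hello, $name\n" if defined $name and length $name;
}

greet($ARGV[0] // 'world');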
When someone says detect duplicates, I immediately think of hashes. If you use a hash based upon your file's name, you can easily see if there are duplicate files.
Of course a hash can only have a single value for each key. Fortunately, in Perl 5.x, that value can be a reference to another data structure.
So, I recommend you use a hash that contains a reference to a list (array in old parlance). You can push each instance of the file to that list.
Using your example, you'd have a data structure that looks like this:
%file_hash = (
    'file1.abc' => [
        '/nfs/disks/version_2.0/dir_a/ln/temp1',
        '/nfs/disks/version_2.0/dir_a/ln/temp2',
        '/nfs/disks/version_2.0/dir_a/ln/temp3',
    ],
    'file2.abc' => [
        '/nfs/disks/version_2.0/dir_a/nn/temp1',
        '/nfs/disks/version_2.0/dir_a/nn/temp2',
        '/nfs/disks/version_2.0/dir_a/nn/temp3',
    ],
);
And, here's a program to do it:
#! /usr/bin/env perl
#
use strict;
use warnings;
use feature qw(say); #Can use `say` which is like `print "\n"`;
use File::Basename; #imports `dirname` and `basename` commands
use File::Find; #Implements Unix `find` command.
use constant DIR => "/nfs/disks/version_2.0";
# Find all duplicates
my %file_hash;
find (\&wanted, DIR);
# Print out all the duplicates
foreach my $file_name (sort keys %file_hash) {
if (scalar (#{$file_hash{$file_name}}) > 1) {
say qq(Duplicate File: "$file_name");
foreach my $dir_name (#{$file_hash{$file_name}}) {
say " $dir_name";
}
}
}
sub wanted {
return if not -f $_;
if (not exists $file_hash{$_}) {
$file_hash{$_} = [];
}
push #{$file_hash{$_}}, $File::Find::dir;
}
Here are a few things about File::Find:
The work takes place in the subroutine wanted.
The $_ is the name of the file, and I can use this to see if this is a file or directory.
$File::Find::name is the full name of the file including the path.
$File::Find::dir is the name of the directory.
If the array reference doesn't exist, I create it with the $file_hash{$_} = [];. This isn't necessary, but I find it comforting, and it can prevent errors. To use $file_hash{$_} as an array, I have to dereference it. I do that by putting a @ in front of it: @{$file_hash{$_}}.
Once all the files are found, I can print out the entire structure. The only thing I do is check to make sure there is more than one member in each array. If there's only a single member, then there are no duplicates.
Response to Grace
Hi David W., thank you very much for your explanation and sample script. Sorry, maybe I'm not really clear in defining my problem statement. I think I can't use a hash for the path finding in the data structure, since there are a few hundred file*.abc and the number is undetermined, and each file*.abc may have the same filename yet actually differ in content across the directory structures.
For example, the file1.abc residing under "/nfs/disks/version_2.0/dir_a/ln/temp1" does not have the same content as the file1.abc residing under "/nfs/disks/version_2.0/dir_a/ln/temp2" and "/nfs/disks/version_2.0/dir_a/ln/temp3". My intention is to grep the list of files*.abc in each of the directory structures (temp1, temp2 and temp3) and compare the filename list with a masterlist. Could you help shed some light on how to solve this? Thanks. – Grace yesterday
I'm just printing the file in my sample code, but instead of printing it, you could open the files and process them. After all, you now have the file name and the directory. Here's the heart of my program again. This time, I'm opening the file and looking at the content:
foreach my $file_name (sort keys %file_hash) {
    if (scalar(@{$file_hash{$file_name}}) > 1) {
        #say qq(Duplicate File: "$file_name");
        foreach my $dir_name (@{$file_hash{$file_name}}) {
            #say "    $dir_name";
            open(my $fh, "<", "$dir_name/$file_name")
                or die qq(Can't open file "$dir_name/$file_name" for reading);
            # Process your file here...
            close $fh;
        }
    }
}
If you are only looking for certain files, you could modify the wanted function to skip over files you don't want. For example, here I am only looking for files which match the file*.txt pattern. Note I use a regular expression of /^file.*\.txt$/ to match the name of the file. As you can see, it's the same as the previous wanted subroutine. The only difference is my test: I'm looking for something that is a file (-f) and has the correct name (file*.txt):
sub wanted {
    return unless -f $_ and /^file.*\.txt$/;
    if (not exists $file_hash{$_}) {
        $file_hash{$_} = [];
    }
    push @{$file_hash{$_}}, $File::Find::dir;
}
If you are looking at the file contents, you can use the MD5 hash to determine if the file contents match or don't match. This reduces a file to a short string of 32 hex characters which could even be used as a hash key instead of the file name. This way, files that have matching MD5 hashes (and thus matching contents) would be in the same hash list.
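A quick sketch of that idea, keying the hash by MD5 digest instead of by file name (digest_file_hex is the same helper used in the program below; the rest is my own illustration):

use Digest::file qw(digest_file_hex);

my %by_md5;   # MD5 hex string => list of paths whose contents hash to it

sub wanted {
    return unless -f $_;
    my $md5 = digest_file_hex($_, "MD5");
    push @{ $by_md5{$md5} }, "$File::Find::dir/$_";
}

# After find(): any @{ $by_md5{$key} } with more than one entry holds
# files with identical contents, regardless of their names.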
You talk about a "master list" of files and it seems you have the idea that this master list needs to match the content of the file you're looking for. So, I'm making a slight mod in my program. I am first taking that master list you talked about, and generating MD5 sums for each file. Then I'll look at all the files in that directory, but only take the ones with the matching MD5 hash...
By the way, this has not been tested.
#!/usr/bin/env perl

use strict;
use warnings;
use feature qw(say);  # enables `say`, which is like `print` plus a trailing newline
use File::Find;       # implements the Unix `find` command
use Digest::file qw(digest_file_hex);

use constant DIR             => "/nfs/disks/version_2.0";
use constant MASTER_LIST_DIR => "/some/directory";
# First, I'm going through the MASTER_LIST_DIR directory
# and finding all of the master list files. I'm going to take
# the MD5 hash of those files, and store them in a Perl hash
# that's keyed by the name of the file. Thus, when I find a
# file with a matching name, I can compare the MD5 of that file
# and the master file. If they match, the files are the same. If
# not, they're different.

# In this example, I'm inlining the function I use to find the files
# instead of making it a separate function.
my %master_hash;
find(
    sub {
        $master_hash{$_} = digest_file_hex($_, "MD5") if -f;
    },
    MASTER_LIST_DIR
);
# Now I have the MD5 of all the master files, and I'm going to search my
# DIR directory for the files that have the same MD5 hash as the
# master list files did. If they do have the same MD5 hash, I'll
# print out their names as before.
my %file_hash;
find(\&wanted, DIR);

# Print out all the duplicates
foreach my $file_name (sort keys %file_hash) {
    if (scalar(@{$file_hash{$file_name}}) > 1) {
        say qq(Duplicate File: "$file_name");
        foreach my $dir_name (@{$file_hash{$file_name}}) {
            say "    $dir_name";
        }
    }
}
# The wanted function has been modified since the last example.
# Here, I'm only going to put files in %file_hash if their MD5
# hash matches that of a same-named master list file.
sub wanted {
    if (-f $_ and ($master_hash{$_} // '') eq digest_file_hex($_, "MD5")) {
        $file_hash{$_} //= [];  # using TLP's syntax hint
        push @{$file_hash{$_}}, $File::Find::dir;
    }
}