How to list all files in the directories - perl

How can I list all files in parent and sub-directories for multiple dirs?
$dir = "/home/httpd/cgi-bin/r/met";
opendir(DIR, "/home/httpd/cgi-bin/r/met") || die "error";
while ($line = readdir DIR) {
    print "$line\n";
    opendir DIR1, "$dir/$line" || die "error";
    while ($line1 = readdir DIR1) {
        print "$line1\n";
    }
}
closedir DIR;
closedir DIR1;

Don't do it this way, use File::Find instead.
use strict;
use warnings;
use File::Find;
my $search = "/home/httpd/cgi-bin/r/met";
sub print_file_names {
print $_,"\n";
}
find ( \&print_file_names, $search );

File::Find walks through a list of directories and executes a subroutine you define for each file or directory found recursively below the starting directories. Before calling your subroutine, find (a function exported by the File::Find module) by default changes into the directory being scanned and sets the following (global) variables (a short sketch printing all three follows the list):
$File::Find::dir -- the path of the directory currently being visited
$File::Find::name -- the complete path of the file being visited ($File::Find::dir joined with the basename)
$_ -- the basename of the file being visited (used in my example)
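For illustration, a minimal callback that prints all three of these variables might look like this (the starting directory is only an example path):
use strict;
use warnings;
use File::Find;
my $start = '/home/httpd/cgi-bin/r/met';   # example path, substitute your own
find( sub {
    print "dir:  $File::Find::dir\n";      # directory currently being scanned
    print "name: $File::Find::name\n";     # complete path of the current item
    print "base: $_\n";                    # basename of the current item
}, $start );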
One way to solve your problem would be:
#!/usr/bin/perl
# Usage: ffind [dir1 ...]
use strict; use warnings;
use 5.010; # to be able to use say
use File::Find;
# use current working directory if no command line argument
@ARGV = qw(.) unless @ARGV;
find( sub { say if -f }, @ARGV );

Related

Multiple level iteration inside folder to find all the text files on Windows

I have a scenario where I need to loop through multiple levels of directories to find text files.
Say I start from the folder C:\A and then I want to read all the text files inside it. But they are not all placed at the same level.
Some text files are arranged at C:\A\A1\sample1.txt
Some as C:\A\sample2.txt
Some as C:\A\A2\A3\sample.txt
I am able to loop inside folder A, which returns me A1 and A2, but I wanted to know whether there is a way to iterate automatically through every level and return the text files along with their paths.
Thanks in advance!
I'd use File::Find::Rule:
#!/usr/bin/env perl
use strict;
use warnings;
use File::Find::Rule;
my @files = File::Find::Rule -> file()
-> name('sample*.txt')
-> in ( 'C:\\A' );
foreach my $file ( @files ) {
print "Found: $file\n";
#process it here.
}
Use the File::Find module to do it.
use strict;
use warnings 'all';
use File::Find 'find';
my @files;
find(sub {
push @files, $File::Find::name if -f and /\.txt$/i;
}, 'C:\A');
print "$_\n" for #files;

Need to loop through a directory and all of its subdirectories to find files of a certain size in Perl

I am attempting to loop through a directory and all of its sub-directories to see if the files within those directories are a certain size. But I am not sure whether the files in the @files array still contain the file size so I can compare the size (i.e. size <= value_size). Can someone offer any guidance?
use strict;
use warnings;
use File::Find;
use DateTime;
my @files;
my $dt = DateTime->now;
my $date = $dt->ymd;
my $start_dir = "/apps/trinidad/archive/in/$date";
my $empty_file = 417;
find( \&wanted, $start_dir);
for my $file ( @files )
{
if(`ls -ltr | awk '{print $5}'`<= $empty_file)
{
print "The file $file appears to be empty please check within the folder if this is empty"
}
else
return;
}
exit;
sub wanted {
push @files, $File::Find::name unless -d;
return;
}
I think you could use this code instead of shelling out to awk.
(I don't understand why my $empty_file = 417; counts as an empty file size.)
if (-s $file <= $empty_file)
Also notice that you are missing an opening and closing brace for your else branch.
(I'm also unsure why you want to 'return' there: the first file found that is not 'empty' branches to that return, which doesn't do anything useful, because return is only meant for returning from a subroutine.)
The exit is unnecessary, and so is the return in the wanted function.
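Putting those fixes together, the loop could look something like this (only a sketch; it keeps your $empty_file threshold and the @files array filled by wanted):
for my $file (@files) {
    my $size = -s $file;           # file size in bytes
    next unless defined $size;     # skip entries that vanished in the meantime
    if ( $size <= $empty_file ) {
        print "The file $file appears to be empty, please check within the folder\n";
    }
}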
Update: A File::Find::Rule solution could be used instead. Here is a small program that captures all files less than 14 bytes in my current directory and all of its subdirectories.
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use File::Find::Rule;
my $dir = '.';
my @files = find( file => size => "<14", in => $dir );
say -s $_, " $_" for @files;

script in perl to copy directory structure from the source to the destination

#!/usr/bin/perl -w
use File::Copy;
use strict;
my $i= "0";
my $j= "1";
my $source_directory = $ARGV[$i];
my $target_directory = $ARGV[$j];
#print $source_directory,"\n";
#print $target_directory,"\n";
my @list = process_files($source_directory);
print "remaining files\n";
print @list;
# Accepts one argument: the full path to a directory.
# Returns: A list of files that reside in that path.
sub process_files {
my $path = shift;
opendir (DIR, $path)
or die "Unable to open $path: $!";
# We are just chaining the grep and map from
# the previous example.
# You'll see this often, so pay attention ;)
# This is the same as:
# LIST = map(EXP, grep(EXP, readdir()))
my @files =
# Third: Prepend the full path
map { $path . '/' . $_}
# Second: take out '.' and '..'
grep { !/^\.{1,2}$/ }
# First: get all files
readdir (DIR);
closedir (DIR);
for (@files) {
if (-d $_) {
# Add all of the new files from this directory
# (and its subdirectories, and so on... if any)
push @files, process_files($_);
} else { # print @files, "\n";
# for (@files)
while (@files)
{
my $input = pop @files;
print $input,"\n";
copy($input,$target_directory);
}
}
# NOTE: we're returning the list of files
return @files;
}
}
This basically copies files from the source to the destination, but I need some guidance on how to copy the directory structure as well. The main thing to note here is that no CPAN modules are allowed except copy, move, and path.
Instead of rolling your own directory-processing adventure, why not simply use File::Find to go through the directory structure for you?
#! /usr/bin/env perl
use 5.010;
use warnings;
use File::Find;
use File::Path qw(make_path);
use File::Copy;
use Cwd;
# The first two arguments are source and dest
# 'shift' pops those arguments off the front of
# the @ARGV list, and returns what was removed
# I use "cwd" to get the current working directory
# and prepend that to $dest_dir. That way, $dest_dir
# is in correct relationship to my input parameter.
my $source_dir = shift;
my $dest_dir = cwd . "/" . shift;
# I change into my $source_dir, so the $source_dir
# directory isn't in the file name when I find them.
chdir $source_dir
or die qq(Cannot change into "$source_dir");
find ( sub {
return unless -f; #We want files only
make_path "$dest_dir/$File::Find::dir"
unless -d "$dest_dir/$File::Find::dir";
copy "$_", "$dest_dir/$File::Find::dir"
or die qq(Can't copy "$File::Find::name" to "$dest_dir/$File::Find::dir");
}, ".");
Now, you don't need a process_files subroutine. You let File::Find::find handle recursing the directory for you.
By the way, you could rewrite the find like this, which is how you usually see it in the documentation:
find ( \&wanted, ".");
sub wanted {
return unless -f; #We want files only
make_path "$dest_dir/$File::Find::dir"
unless -d "$dest_dir/$File::Find::dir";
copy "$_", "$dest_dir/$File::Find::dir"
or die qq(Can't copy "$File::Find::name" to "$dest_dir/$File::Find::dir");
}
I prefer to embed my wanted subroutine into my find command instead because I think it just looks better. It first of all guarantees that the wanted subroutine is kept with the find command. You don't have to look at two different places to see what's going on.
Also, the find command has a tendency to swallow up your entire program. Imagine a case where I get a list of files and do some complex processing on them: the entire program can end up in the wanted subroutine. To avoid this, you simply create an array of the files you want to operate on, and then operate on them inside your program:
...
my @file_list;
find ( \&wanted, "$source_dir" );
for my $file ( @file_list ) {
...
}
sub wanted {
return unless -f;
push @file_list, $File::Find::name;
}
I find this a programming abomination. First of all, what is going on with find? It's modifying my @file_list, but how? Nowhere in the find command is @file_list mentioned. What is it doing?
Then at the end of my program is this sub wanted function that is using a variable, @file_list, in a global manner. That's bad programming practice.
Embedding my subroutine directly into my find command solves many of these issues:
my @file_list;
find ( sub {
return unless -f;
push @file_list, $File::Find::name;
}, $source_dir );
for my $file ( @file_list ) {
...
}
This just looks better. I can see that @file_list is being manipulated directly by my find command. Plus, that pesky wanted subroutine has disappeared from the end of my program. It's the exact same code. It just looks better.
Let's get to what that find command is doing and how it works with the wanted subroutine:
The find command finds each and every file, directory, link, or whatnot located in the directory list you pass to it. With each item it finds in that directory, it passes it to your wanted subroutine for processing. A return leaves the wanted subroutine and allows find to fetch the next item.
Each time the wanted subroutine is called, find sets three variables:
$File::Find::name: The name of the item found with the full path attached to it.
$File::Find::dir: The name of the directory where the item was found.
$_: The name of the item without the directory name.
In Perl, that $_ variable is very special. It's sort of a default variable for many commands. That is, if you execute a command and don't give it a variable to use, that command will use $_. For example:
print
prints out $_
return if -f;
Is the same as saying this:
if ( -f $_ ) {
return;
}
This for loop:
for ( @file_list ) {
...
}
Is the same as this:
for $_ ( @file_list ) {
...
}
Normally, I avoid the default variable. It's global in scope and it's not always obvious what is being acted upon. However, there are a few circumstances where I'll use it because it really clarifies the program's meaning:
return unless -f;
in my wanted function is very obvious. I exit the wanted subroutine unless I was handed a file. Here's another:
return unless /\.txt$/;
This will exit my wanted function unless the item ends with '.txt'.
I hope this clarifies what my program is doing. Plus, I eliminated a few bugs while I was at it. I miscopied $File::Find::dir to $File::Find::name which is why you got the error.

find folders with no further subfolders in perl

How do I find, in a given path, all folders with no further subfolders? They may contain files but no further folders.
For example, given the following directory structure:
time/aa/
time/aa/bb
time/aa/bb/something/*
time/aa/bc
time/aa/bc/anything/*
time/aa/bc/everything/*
time/ab/
time/ab/cc
time/ab/cc/here/*
time/ab/cc/there/*
time/ab/cd
time/ab/cd/everywhere/*
time/ac/
The output of find(time) should be as follows:
time/aa/bb/something/*
time/aa/bc/anything/*
time/aa/bc/everything/*
time/ab/cc/here/*
time/ab/cc/there/*
time/ab/cd/everywhere/*
* above represents files.
Any time you want to write a directory walker, always use the standard File::Find module. When dealing with the filesystem, you have to be able to handle odd corner cases, and naïve implementations rarely do.
The environment provided to the callback (named wanted in the documentation) has three variables that are particularly useful for what you want to do.
$File::Find::dir is the current directory name
$_ is the current filename within that directory
$File::Find::name is the complete pathname to the file
When we find a directory that is not . or .., we record the complete path and delete its parent, which we now know cannot be a leaf directory. At the end, any recorded paths that remain must be leaves because find in File::Find performs a depth-first search.
#! /usr/bin/env perl
use strict;
use warnings;
use File::Find;
@ARGV = (".") unless @ARGV;
my %dirs;
sub wanted {
return unless -d && !/^\.\.?\z/;
++$dirs{$File::Find::name};
delete $dirs{$File::Find::dir};
}
find \&wanted, @ARGV;
print "$_\n" for sort keys %dirs;
You can run it against a subdirectory of the current directory
$ leaf-dirs time
time/aa/bb/something
time/aa/bc/anything
time/aa/bc/everything
time/ab/cc/here
time/ab/cc/there
time/ab/cd/everywhere
or use a full path
$ leaf-dirs /tmp/time
/tmp/time/aa/bb/something
/tmp/time/aa/bc/anything
/tmp/time/aa/bc/everything
/tmp/time/ab/cc/here
/tmp/time/ab/cc/there
/tmp/time/ab/cd/everywhere
or plumb multiple directories in the same invocation.
$ mkdir -p /tmp/foo/bar/baz/quux
$ leaf-dirs /tmp/time /tmp/foo
/tmp/foo/bar/baz/quux
/tmp/time/aa/bb/something
/tmp/time/aa/bc/anything
/tmp/time/aa/bc/everything
/tmp/time/ab/cc/here
/tmp/time/ab/cc/there
/tmp/time/ab/cd/everywhere
Basically, you open the root folder and use the following procedure:
sub child_dirs {
    my ($directory) = @_;
    # 1. Open the directory.
    opendir my $dir, $directory or die $!;
    # 2. Select the entries in this directory that are themselves directories.
    my @subdirs = grep {-d $_ and not m</\.\.?$>} map "$directory/$_", readdir $dir;
    #             ^-- directory and not . or ..   ^-- use full name
    # 3. If the list of such selected directories contains elements,
    #    3.1. then recurse into each such directory,
    #    3.2. else this directory is a "leaf" and it will be appended to the output files.
    if (@subdirs) {
        return map {child_dirs($_)} @subdirs;
    } else {
        return "$directory/*";
    }
    # OR: @subdirs ? map {child_dirs($_)} @subdirs : "$directory/*";
}
Example usage:
say $_ for child_dirs("time"); # dir `time' has to be in current directory.
This function will do it. Just call it with your initial path:
sub isChild {
my $folder = shift;
my $isChild = 1;
opendir(my $dh, $folder) || die "can't opendir $folder: $!";
while (readdir($dh)) {
next if (/^\.{1,2}$/); # skip . and ..
if (-d "$folder/$_") {
$isChild = 0;
isChild("$folder/$_");
}
}
closedir $dh;
if ($isChild) { print "$folder\n"; }
}
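For example, calling it as isChild('time'); would print every leaf directory under time/ (the path is only an illustration).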
I tried the readdir way of doing things. Then I stumbled upon this...
use File::Find::Rule;
# find all the subdirectories of a given directory
my @subdirs = File::Find::Rule->directory->in( $directory );
From that list I then eliminated every entry that is the initial part (i.e. the parent) of some other entry, which leaves only the leaf directories.
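A sketch of that post-filtering step (the path and variable names are only placeholders):
use strict;
use warnings;
use File::Find::Rule;
my $directory = 'time';   # example path
my @subdirs = File::Find::Rule->directory->in( $directory );
# a directory is a leaf if no other entry in the list lives underneath it
my @leaves = grep {
    my $candidate = $_;
    !grep { index( $_, "$candidate/" ) == 0 } @subdirs;
} @subdirs;
print "$_\n" for @leaves;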

How to find files/folders recursively in Perl script?

I have a Perl script which I have written to search files present in my Windows folders, recursively. I enter the search text as the Perl script's runtime argument to find a file having this text in its name. The Perl script is as below:
use Cwd;
$file1 = $ARGV[0];
#@res1 = glob "*test*";
#@res1 = glob "$file1*";
@res1 = map { Cwd::abs_path($_) } glob "$file1*";
foreach (@res1)
{
print "$_\n";
}
But this is not searching all the sub-directories recursively. I know glob doesn't match recursively.
So I tried using the File::Find module and the function find(\&wanted, @directories);
But I got an error saying find() is undefined. From what I read in the help, I thought the find() function is defined by default in the Perl installation, with some basic code to find folders/files. Isn't that correct?
The question is: in the above Perl script, how do I search for files/folders recursively?
Second question: I found that the perldoc <module> help does not have examples of using a particular function from that module, which would make things clearer.
Can you point me to some good help/documentation/books on using functions from various Perl modules, with clear examples of their usage?
Another excellent module to use is File::Find::Rule which hides some of the complexity of File::Find while exposing the same rich functionality.
use File::Find::Rule;
use Cwd;
my $cwd = getcwd();
my $filelist;
sub buildFileIndex {
open ($filelist, ">", "filelist.txt") || die $!;
# File find rule
my $excludeDirs = File::Find::Rule->directory
->name('demo', 'test', 'sample', '3rdParty') # Provide specific list of directories to *not* scan
->prune # don't go into it
->discard; # don't report it
my $includeFiles = File::Find::Rule->file
->name('*.txt', '*.csv'); # search by file extensions
my @files = File::Find::Rule->or( $excludeDirs, $includeFiles )
->in($cwd);
print $filelist map { "$_\n" } @files;
return \$filelist;
}
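A call site would then simply be buildFileIndex(); (it writes filelist.txt into the current working directory).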
These two pages are all you need to study:
File::Find documentation
Beginners guide to File::Find
If you don't mind using a CPAN module, Path::Class can do the work for you:
use Path::Class;
my @files;
dir('.')->recurse(callback => sub {
my $file = shift;
if($file =~ /some text/) {
push @files, $file->absolute->stringify;
}
});
for my $file (@files) {
# ...
}
An alternative would be to use find2perl to create the start of the script for you. It can turn a find command like
find . -type f -name "*test*" -print
into an equivalent Perl script. You just put find2perl in place of find. It uses File::Find under the hood but gets you going quickly.
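For reference, the script find2perl generates for that command is roughly of this shape (only a sketch; the exact output differs between versions):
use strict;
use warnings;
use File::Find;
# mirrors: find . -type f -name "*test*" -print
sub wanted {
    -f $_ && /test/ && print "$File::Find::name\n";
}
find( \&wanted, '.' );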
use 5.010; # Enable 'say' feature
use strict;
use warnings;
use File::Find; # The module for 'find'
find(\&wanted, @ARGV); # @ARGV is the array of directories to find.
sub wanted {
# Do something...
# Some useful variables:
say $_; # File name in each directory
say $File::Find::dir; # the current directory name
say $File::Find::name; # the complete pathname to the file
}
Example for listing driver modules on Linux (Fedora):
use 5.022;
use strict;
use warnings;
use POSIX qw(uname);
use File::Find;
my $kernel_ver = (uname())[2];
my @dir = (
"/lib/modules/$kernel_ver/kernel/drivers"
);
find(\&wanted, @dir);
sub wanted {
say if /.*\.ko\.xz/;
}