How do I find, in a given path, all folders with no further subfolders? They may contain files but no further folders.
For example, given the following directory structure:
time/aa/
time/aa/bb
time/aa/bb/something/*
time/aa/bc
time/aa/bc/anything/*
time/aa/bc/everything/*
time/ab/
time/ab/cc
time/ab/cc/here/*
time/ab/cc/there/*
time/ab/cd
time/ab/cd/everywhere/*
time/ac/
The output of find(time) should be as follows:
time/aa/bb/something/*
time/aa/bc/anything/*
time/aa/bc/everything/*
time/ab/cc/here/*
time/ab/cc/there/*
time/ab/cd/everywhere/*
The * above represents files.
Any time you want to write a directory walker, always use the standard File::Find module. When dealing with the filesystem, you have to be able to handle odd corner cases, and naïve implementations rarely do.
The environment provided to the callback (named wanted in the documentation) has three variables that are particularly useful for what you want to do.
$File::Find::dir is the current directory name
$_ is the current filename within that directory
$File::Find::name is the complete pathname to the file
When we find a directory that is not . or .., we record the complete path and delete its parent, which we now know cannot be a leaf directory. At the end, any recorded paths that remain must be leaves because find in File::Find performs a depth-first search.
#! /usr/bin/env perl

use strict;
use warnings;

use File::Find;

@ARGV = (".") unless @ARGV;

my %dirs;

sub wanted {
    return unless -d && !/^\.\.?\z/;
    ++$dirs{$File::Find::name};
    delete $dirs{$File::Find::dir};
}

find \&wanted, @ARGV;

print "$_\n" for sort keys %dirs;
You can run it against a subdirectory of the current directory
$ leaf-dirs time
time/aa/bb/something
time/aa/bc/anything
time/aa/bc/everything
time/ab/cc/here
time/ab/cc/there
time/ab/cd/everywhere
or use a full path
$ leaf-dirs /tmp/time
/tmp/time/aa/bb/something
/tmp/time/aa/bc/anything
/tmp/time/aa/bc/everything
/tmp/time/ab/cc/here
/tmp/time/ab/cc/there
/tmp/time/ab/cd/everywhere
or plumb multiple directories in the same invocation.
$ mkdir -p /tmp/foo/bar/baz/quux
$ leaf-dirs /tmp/time /tmp/foo
/tmp/foo/bar/baz/quux
/tmp/time/aa/bb/something
/tmp/time/aa/bc/anything
/tmp/time/aa/bc/everything
/tmp/time/ab/cc/here
/tmp/time/ab/cc/there
/tmp/time/ab/cd/everywhere
Basically, you open the root folder and use the following procedure:

sub child_dirs {
    my ($directory) = @_;

    # 1. Open the directory
    opendir my $dir, $directory or die $!;

    # 2. Select the entries in this directory that are themselves
    #    directories, skipping . and .. and using the full name
    my @subdirs = grep { -d $_ and not m</\.\.?$> } map "$directory/$_", readdir $dir;

    # 3. If the list of subdirectories contains elements,
    #    3.1. then recurse into each such directory,
    #    3.2. else this directory is a "leaf" and is appended to the output
    if (@subdirs) {
        return map { child_dirs($_) } @subdirs;
    } else {
        return "$directory/*";
    }
    # OR: @subdirs ? map { child_dirs($_) } @subdirs : "$directory/*";
}
Example usage:
use v5.10;    # for say
say $_ for child_dirs("time");    # dir `time' has to be in the current directory
This function will do it. Just call it with your initial path:
sub isChild {
    my $folder = shift;
    my $isChild = 1;

    opendir(my $dh, $folder) || die "can't opendir $folder: $!";
    while (readdir($dh)) {    # readdir assigns to $_ here (Perl 5.12+)
        next if (/^\.{1,2}$/);    # skip . and ..
        if (-d "$folder/$_") {
            $isChild = 0;
            isChild("$folder/$_");
        }
    }
    closedir $dh;

    if ($isChild) { print "$folder\n"; }
}
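For example, to print the leaf directories of the tree from the question (assuming time is in the current directory):

isChild("time");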
I tried the readdir way of doing things. Then I stumbled upon this...
use File::Find::Rule;

# find all the subdirectories of a given directory
my @subdirs = File::Find::Rule->directory->in( $directory );
From that output, I then eliminated every entry that matched the initial part of another entry (i.e. every directory that is the parent of some other entry), keeping only the leaves, as sketched below.
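A rough sketch of that filter, assuming $directory is the starting path (the quadratic scan over @subdirs is fine for small trees):

use File::Find::Rule;

my $directory = 'time';    # assumed starting directory
my @subdirs = File::Find::Rule->directory->in( $directory );

# keep only directories that are not the parent of any other entry
my @leaves = grep {
    my $dir = $_;
    !grep { $_ ne $dir && index($_, "$dir/") == 0 } @subdirs;
} @subdirs;

print "$_\n" for @leaves;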
Related
I have multiple subdirectories, and within each of them I have a different number of txt files. I am trying to read each of the txt files into an array from each subdirectory. Note that each subdirectory has a different number of txt files. I struggled to find that somebody did something similar. Does anybody have a suggestion where to look, how to do it, or something like this?
I have found an example of how it can be done by using a server command, but it fails to do what I want. I am also a bit confused about how to name each array, although arrays within different subdirectories can have the same names, like array1, array2, array3...
#!/usr/local/bin/perl
use strict;
use warnings;
use File::Glob;

my $txt;
my @fh;
my @table;
my $table;

for my $txt (glob '*.txt')
{
    open my $fh, '<', $txt;
    print "$txt\n";
    for (my $txt = 1; $txt <= 8; $txt++)
    {
        open ($fh, "server$txt");
        while (<$fh>)
        {
            chomp;
            my @values = split " ", $_;
            push @{ "table$txt" }, \@values;
            print "$table$txt\n";
        }
    }
}
I can use this bash script to run the perl script on all subdirectories:
for i in `ls -d */`;do cd $i; pwd; for j in *txt; do perl ../foo.pl $j; done; cd ../ ; done
I have not tested this, I only typed the code into the window. I am also rusty on my Perl and do things in a blunt manner.
Assuming all the text dirs are under one main directory: you open the main directory using opendir, then read all the entries in the main directory and test whether each entry is a subdirectory (where the .txt files will be).
For each subdirectory, you then collect the .txt files using glob, which returns an array; pushing that array into the main array creates an array of arrays. You can look up how to iterate over this structure to get your information.
my @all_subdirs;
my $main_dir = "C:/main_dir";
my @files;

opendir (DIR, $main_dir) || die $!;
@files = readdir (DIR);
closedir (DIR) || die $!;

my $i = 0;
foreach my $subdir (@files) {
    next if $subdir =~ /^\.\.?$/;          # skip . and ..
    if (-d "$main_dir/$subdir") {          # -d tests for directory
        my @tmp = glob "$main_dir/$subdir/*.txt";
        $all_subdirs[$i] = [ @tmp ];
        $i++;
    }
}
This stores each subdir's file list as an array reference. To get an array back, you need to dereference it as follows:

my $array_ref = $all_subdirs[0];
my @an_array = @$array_ref;
You are not clear whether you wish to read the names of the files into an array, or the contents of the files.
In the former case, you may benefit from the File::Find module to collect all filenames in a specific directory and its subdirectories; in the latter case you can combine File::Find with File::Slurp to read the contents, as in the sketch below.
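A minimal sketch of the second case, assuming the top directory is called main_dir and you want each file's lines keyed by its path:

use strict;
use warnings;
use File::Find;
use File::Slurp;

my %contents;    # path => arrayref of lines
find( sub {
    return unless -f && /\.txt$/;    # only plain .txt files
    # $_ is the bare filename here; find() has already chdir'ed into its directory
    $contents{$File::Find::name} = [ read_file($_) ];
}, 'main_dir' );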
I received a Perl script which currently reads a list of directories from a text file and stores them in a string vector. I would like to modify it so that it reads the names of all the directories in the current folder, and stores them in the vector. This way, the user doesn't have to modify the input file each time the list of directories in the current folder changes.
I have no knowledge of Perl, apart from the fact that it looks like array indices in Perl start from 0 (as in Python). I have a basic knowledge of bash and Python, but I'd rather not rewrite the script from scratch in Python. It's a long, complex script, and I'm not sure I'd be able to rewrite it in Python. Can you help me? Here is the part of the script which is currently reading the text file:
#!/usr/bin/perl
use Cwd;
.
.
.
open FILES, "<files.txt" or die; # open input file
<FILES> or die; # skip a comment
my $nof = <FILES> or die; # number of directories
<FILES> or die; # skip a comment
my @massflow; # read directories
for (my $i = 0; $i < $nof; $i++){
    chomp($massflow[$i] = <FILES>);
}
.
.
.
close(FILES);
PS I think the script is rather self-explanatory, but just to be sure, this piece opens a text file called "files.txt", skips a line, reads the number of directories, skips another line and reads, one name for each line, the names of all the directories in the current folder, as written in "files.txt".
EDIT: I wrote this script following @Sobrique's suggestion, but it also lists files, not only dirs:
#!/usr/bin/perl
use Cwd;
my @flow = glob ("*");
my $arrSize = @flow;
print $arrSize;
for (my $i = 0; $i < $arrSize; $i++){
    print $flow[$i], "\n";
}
It's simpler than you think:
my @list_of_files = glob ("/path/to/files/*");

If you want to filter by a criterion - like 'is it a directory' - you can:

my @list_of_dirs = grep { -d } glob "/path/to/dirs/*";
Open the directory inside which the sub-directories are with opendir, and read its contents with readdir. Filter out everything that is not a directory using the file test -d (see -X in perlfunc):

my $rootdir = 'top-level-directory';
opendir my $dh, $rootdir or die "Can't open directory $rootdir: $!";
my @dirlist = grep { -d } map { "$rootdir/$_" } grep { !/^\.\.?$/ } readdir($dh);

Since readdir returns bare names, we need to prepend the path; the inner grep drops the . and .. entries, which are directories and would otherwise slip through the -d test.
You can also get dirs like this:

my @dir = `find . -type d`;

perl -e ' use strict; use warnings; use Data::Dumper; my @dir = `find . -type d`; print Dumper(\@dir);'
$VAR1 = [
'.
',
'./.fonts
',
'./.mozilla
',
'./bin
',
'./.ssh
',
'./scripts
'
];
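Note that each element keeps the trailing newline from find's output; chomp(@dir); will strip them if you need clean paths.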
I am new to Perl. I have a directory structure, and in each directory there is a log file. I want to grep a pattern from each file and do post-processing. Right now I am grepping the pattern from those files using Unix grep, putting the results into a text file, and reading that text file for the post-processing, but I want to automate the task of reading each file and grepping the pattern from it. In the code below, mdp_cgdis_1102.txt holds the pattern grepped from the directories. I would really appreciate any help.
#!/usr/bin/perl
use strict;
use warnings;

open FILE, 'mdp_cgdis_1102.txt' or die "Cannot open file $!";
my @array = <FILE>;
my @arr;
my @brr;
foreach my $i (@array){
    @arr = split (/\//, $i);
    @brr = split (/\:/, $i);
    print " $arr[0] --- $brr[2]";
}
It is unclear to me which part of the process needs automating. I'll go by "want to automate reading each file and grepping pattern from that file," whereby you presumably already have a list of files. If you actually need to build the file list as well, see the added code below.
One way: pull all patterns from each file and store that in a hash (filename => arrayref-with-patterns)
my %file_pattern;

foreach my $file (@filelist) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    $file_pattern{$file} = [ grep { /$pattern/ } <$fh> ];
    close $fh;
}
The [ ] takes a reference to the list returned by grep, i.e. it constructs an "anonymous array", and that reference is assigned as the value for the $file key.
Now you can process your patterns, per log file
foreach my $filename (sort keys %file_pattern) {
    print "Processing log $filename.\n";
    my @patterns = @{ $file_pattern{$filename} };
    # Process the list of patterns in this log file
}
ADDED
In order to build the list of files @filelist used above, from a known list of directories, use the core File::Find module, which recursively scans the supplied directories and applies the supplied subroutines:
use File::Find;

find( { wanted => \&process_logs, preprocess => \&select_logs }, @dir_list );
Your subroutine process_logs() is applied to each file/directory that passed preprocessing by the second sub, with its name available as $File::Find::name, and in it you can either populate the hash with patterns-per-log as shown above, or run the complete processing as needed.
Your subroutine select_logs() contains code to filter the log files out of all the files in each directory that File::Find would normally process, so that process_logs() only gets the log files. A sketch of both subs follows.
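Here is one possible shape for those two callbacks, assuming the log files end in .log and that %file_pattern and $pattern from above are in scope:

sub select_logs {
    # preprocess receives the current directory's entries (bare names) and
    # must return the filtered list; keep subdirectories so the recursion
    # continues, plus the log files themselves
    return grep { -d or /\.log$/ } @_;
}

sub process_logs {
    return unless -f;    # skip the directories we let through above
    open my $fh, '<', $_ or die "Can't open $File::Find::name: $!";
    $file_pattern{$File::Find::name} = [ grep { /$pattern/ } <$fh> ];
    close $fh;
}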
Another way would be to use the other invocation

find(\&process_all, @dir_list);

where now the sub process_all() is applied to all entries (files and directories) found, and thus this sub itself needs to ensure that it only processes the log files. See the linked documentation. For instance:
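A corresponding sketch of the single-callback form, under the same assumptions:

sub process_all {
    return unless -f && /\.log$/;    # the sub does its own filtering here
    open my $fh, '<', $_ or die "Can't open $File::Find::name: $!";
    $file_pattern{$File::Find::name} = [ grep { /$pattern/ } <$fh> ];
    close $fh;
}

find(\&process_all, @dir_list);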
The equivalent of

find ... -name '*.txt' -type f -exec grep ... {} +

is

use File::Find::Rule qw( );

my $base_dir_qfn = ...;
my $re = qr/.../;

my @log_qfns =
    File::Find::Rule
    ->name(qr/\.txt\z/)
    ->file
    ->in($base_dir_qfn);

my $success = 1;
for my $log_qfn (@log_qfns) {
    open(my $fh, '<', $log_qfn)
        or do {
            $success = 0;
            warn("Can't open log file \"$log_qfn\": $!\n");
            next;
        };
    while (<$fh>) {
        print if /$re/;
    }
}

exit(1) if !$success;
Use File::Find to traverse the directory. In a loop, go through all the logfiles:

Open the file
Read it line by line
For each line, do a regular expression match (if ($line =~ /pattern/)), or use if (index($line, $searchterm) >= 0) if you are looking for a certain static string
If you find a match, print the line
Close the file
I hope that gives you enough pointers to get started. You will learn more if you find out how to do each of these steps in Perl by yourself (I pointed out the hard ones).
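If you get stuck, a bare-bones version of those steps might look like this (the ERROR pattern and the .log extension are assumptions):

use strict;
use warnings;
use File::Find;

my $pattern = qr/ERROR/;             # assumed pattern
find( sub {
    return unless -f && /\.log$/;    # assumed log-file extension
    open my $fh, '<', $_ or die "Can't open $File::Find::name: $!";
    while (my $line = <$fh>) {
        print "$File::Find::name: $line" if $line =~ $pattern;
    }
    close $fh;
}, '.' );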
How can I list all files in parent and subdirectories for multiple dirs?
$dir="/home/httpd/cgi-bin/r/met";
opendir(DIR,"/home/httpd/cgi-bin/r/met")||die"error";
while($line=readdir DIR)
{
print"$line\n";
opendir DIR1,"$dir/$line"||die"error";
while($line1=readdir DIR1)
{
print"$line1\n";
}
}
closedir DIR;
closedir DIR1;
Don't do it this way, use File::Find instead.
use strict;
use warnings;
use File::Find;
my $search = "/home/httpd/cgi-bin/r/met";
sub print_file_names {
    print $_, "\n";
}
find ( \&print_file_names, $search );
File::Find effectively walks through a list of directories and executes a subroutine defined by you for each file or directory found recursively below the starting directory. Before calling your subroutine, find (a function exported by the File::Find module) by default changes into the directory being scanned and sets the following (global) variables:
$File::Find::dir -- the path of the directory being visited, relative to the starting directory
$File::Find::name -- the full path of the file being visited, relative to the starting directory
$_ -- the basename of the file being visited (used in my example)
One way to solve your problem would be:
#!/usr/bin/perl
# Usage: ffind [dir1 ...]
use strict;
use warnings;
use 5.010;    # to be able to use say
use File::Find;

# use current working directory if no command line argument
@ARGV = qw(.) unless @ARGV;

find( sub { say if -f }, @ARGV );
#!/usr/bin/perl -w
use File::Copy;
use strict;

my $i = "0";
my $j = "1";
my $source_directory = $ARGV[$i];
my $target_directory = $ARGV[$j];
#print $source_directory,"\n";
#print $target_directory,"\n";

my @list = process_files($source_directory);
print "remaining files\n";
print @list;

# Accepts one argument: the full path to a directory.
# Returns: A list of files that reside in that path.
sub process_files {
    my $path = shift;
    opendir (DIR, $path)
        or die "Unable to open $path: $!";

    # We are just chaining the grep and map from
    # the previous example.
    # You'll see this often, so pay attention ;)
    # This is the same as:
    # LIST = map(EXP, grep(EXP, readdir()))
    my @files =
        # Third: Prepend the full path
        map { $path . '/' . $_ }
        # Second: take out '.' and '..'
        grep { !/^\.{1,2}$/ }
        # First: get all files
        readdir (DIR);
    closedir (DIR);

    for (@files) {
        if (-d $_) {
            # Add all of the new files from this directory
            # (and its subdirectories, and so on... if any)
            push @files, process_files($_);
        } else { #print @files,"\n";
            # for(@files)
            while (@files)
            {
                my $input = pop @files;
                print $input, "\n";
                copy($input, $target_directory);
            }
        }
        # NOTE: we're returning the list of files
        return @files;
    }
}
This basically copies files from source to destination, but I need some guidance on how to copy the directory as well. The main thing to note here is that no CPAN modules are allowed except copy, move, and path.
Instead of rolling your own directory processing adventure, why not simply use File::Find to go through the directory structure for you?

#! /usr/bin/env perl

use v5.10;
use warnings;

use File::Find;
use File::Path qw(make_path);
use File::Copy;
use Cwd;

# The first two arguments are source and dest.
# 'shift' pops those arguments off the front of
# the @ARGV list, and returns what was removed.
# I use "cwd" to get the current working directory
# and prepend that to $dest_dir. That way, $dest_dir
# is in correct relationship to my input parameter.
my $source_dir = shift;
my $dest_dir   = cwd . "/" . shift;

# I change into my $source_dir, so the $source_dir
# directory isn't in the file name when I find them.
chdir $source_dir
    or die qq(Cannot change into "$source_dir");

find ( sub {
    return unless -f;    # We want files only
    make_path "$dest_dir/$File::Find::dir"
        unless -d "$dest_dir/$File::Find::dir";
    copy "$_", "$dest_dir/$File::Find::dir"
        or die qq(Can't copy "$File::Find::name" to "$dest_dir/$File::Find::dir");
}, ".");
Now, you don't need a process_files subroutine. You let File::Find::find handle recursing the directory for you.
By the way, you could rewrite the find like this which is how you usually see it in the documentation:
find ( \&wanted, ".");

sub wanted {
    return unless -f;    # We want files only
    make_path "$dest_dir/$File::Find::dir"
        unless -d "$dest_dir/$File::Find::dir";
    copy "$_", "$dest_dir/$File::Find::dir"
        or die qq(Can't copy "$File::Find::name" to "$dest_dir/$File::Find::dir");
}
I prefer to embed my wanted subroutine into my find command instead because I think it just looks better. It first of all guarantees that the wanted subroutine is kept with the find command. You don't have to look at two different places to see what's going on.
Also, the find command has a tendency to swallow up your entire program. Imagine that I get a list of files and do some complex processing on them. The entire program can end up in the wanted subroutine. To avoid this, you simply create an array of the files you want to operate on, and then operate on them inside your program:
...
my @file_list;
find ( \&wanted, "$source_dir" );
for my $file ( @file_list ) {
    ...
}

sub wanted {
    return unless -f;
    push @file_list, $File::Find::name;
}
I find this a programming abomination. First of all, what is going on with find? It's modifying my @file_list, but how? Nowhere in the find command is @file_list mentioned. What is it doing?
Then at the end of my program is this sub wanted function that is using a variable, @file_list, in a global manner. That's bad programming practice.
Embedding my subroutine directly into my find command solves many of these issues:

my @file_list;
find ( sub {
    return unless -f;
    push @file_list, $File::Find::name;
}, $source_dir );

for my $file ( @file_list ) {
    ...
}

This just looks better. I can see that @file_list is being manipulated directly by my find command. Plus, that pesky wanted subroutine has disappeared from the end of my program. It's the exact same code. It just looks better.
Let's get to what that find command is doing and how it works with the wanted subroutine:
The find command finds each and every file, directory, link, or whatnot located in the directory list you pass to it. With each item it finds in that directory, it passes it to your wanted subroutine for processing. A return leaves the wanted subroutine and allows find to fetch the next item.
Each time the wanted subroutine is called, find sets three variables:
$File::Find::name: The name of the item found with the full path attached to it.
$File::Find::dir: The name of the directory where the item was found.
$_: The name of the item without the directory name.
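For example, when find is visiting /tmp/time/aa/bb, those three variables hold /tmp/time/aa/bb, /tmp/time/aa, and bb respectively.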
In Perl, that $_ variable is very special. It's sort of a default variable for many commands. That is, you execute a command, and if you don't give it a variable to use, that command will use $_. For example:

print

prints out $_, and

return if -f;

is the same as saying this:

if ( -f $_ ) {
    return;
}
This for loop:

for ( @file_list ) {
    ...
}

is the same as this:

for $_ ( @file_list ) {
    ...
}
Normally, I avoid the default variable. It's global in scope and it's not always obvious what is being acted upon. However, there are a few circumstances where I'll use it because it really clarifies the program's meaning:
return unless -f;
in my wanted function is very obvious. I exit the wanted subroutine unless I was handed a file. Here's another:
return unless /\.txt$/;
This will exit my wanted function unless the item ends with '.txt'.
I hope this clarifies what my program is doing. Plus, I eliminated a few bugs while I was at it. I miscopied $File::Find::dir to $File::Find::name which is why you got the error.