How do I search for .exe files - perl

Do you guys have an idea of how to search for, or list, the .exe files on the server I am currently using (or maybe place them in an array)?
I will use this in my Perl program. Assume that my program is also located on the said server.
My OS is Linux - Ubuntu, if that even matters. Working in the CLI here. =)

It is not clear whether you want '*.exe' files or executable files.
You can use File::Find::Rule to find either:
use File::Find::Rule;
my @exe = File::Find::Rule->executable->in('/');     # all executable files
my @exe = File::Find::Rule->name('*.exe')->in('/');  # all .exe files
If you are looking for executable files, you (the user running the script) need to be able to execute the file, so you probably need to run the script as root. It might take a long time to run, too.
If you are looking for .exe files, chances are that your disk is already indexed by locate, so this would be much faster:
my @exe = `locate .exe | grep '\.exe$'`;

Perl to find every file under a specified directory that has a .exe suffix:
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;

die "Usage: $0 startdir\n" unless scalar @ARGV == 1;

my $startdir = shift @ARGV;
my @stack;

sub process_file {
    my $file = shift;
    print "$file\n" if $file =~ /\.exe$/i;
}

sub process_dir {
    my $dir = shift;
    opendir my $dh, $dir or die "Cannot open $dir: $!\n";
    while (defined(my $cont = readdir($dh))) {
        next if $cont eq '.' || $cont eq '..';
        my $fullpath = File::Spec->catfile($dir, $cont);
        if (-d $fullpath) {
            push @stack, $fullpath if -r $fullpath;
        } elsif (-f $fullpath) {
            process_file($fullpath);
        }
    }
    closedir($dh);
}

if (-f $startdir) {
    process_file($startdir);
} elsif (-d $startdir) {
    @stack = ($startdir);
    while (scalar @stack) {
        process_dir(shift @stack);
    }
} else {
    die "$startdir is not a file or directory\n";
}

Have a look at File::Find.
Alternatively, if you can come up with a command line to the *nix file command, you can use find2perl to convert that command line to a Perl snippet.
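If you go the File::Find route (a core module, so nothing to install), a minimal sketch might look like this; the suffix and the command-line driver are illustrative choices, not anything from the original question:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Collect every file under $startdir whose name ends with $suffix.
# Inside the wanted callback, $_ is the basename (File::Find chdirs
# into each directory by default) and $File::Find::name is the path.
sub find_by_suffix {
    my ($startdir, $suffix) = @_;
    my @found;
    find(sub {
        push @found, $File::Find::name
            if -f $_ && $_ =~ /\Q$suffix\E$/i;
    }, $startdir);
    return @found;
}

# Only scan when a start directory is given on the command line.
if (@ARGV) {
    print "$_\n" for find_by_suffix($ARGV[0], '.exe');
}
```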

I'll probably be shot down for suggesting this, but you don't have to use modules for a simple task. For example:
#!/usr/bin/perl -w
my @array = `find ~ -name '*.exe' -print`;
foreach (@array) {
    print;
}
Of course, it will need some tweaking for your particular choice of starting directory (here, I used ~ for the home directory).
EDIT: Maybe I should have said "until you get the modules installed".

To get files recursively, use File::Find:
use File::Find;

## call the function with your search dir and the type of the file
my @exe_files = get_files("define root directory", ".exe");
## now @exe_files holds all the .exe files

sub get_files {
    my ($location, $type) = @_;
    my @file_list;
    if (defined $type) {
        find(sub {
            my $str = $File::Find::name;
            push @file_list, $str if $str =~ m/\Q$type\E$/;
        }, $location);
    } else {
        find(sub { push @file_list, $File::Find::name }, $location);
    }
    return @file_list;
}

Related

Print files and subdirectories of given directory

I am trying to get all files and directories from a given directory, but I can't tell which entries are files and which are directories. Nothing is being printed. What am I doing wrong and how can I solve it? Here is the code:
sub DoSearch {
    my $currNode = shift;
    my $currentDir = opendir(my $dirHandler, $currNode->rootDirectory) or die $!;
    while (my $node = readdir($dirHandler)) {
        if ($node eq '.' or $node eq '..') {
            next;
        }
        print "File: " . $node . "\n" if -f $node;
        print "Directory " . $node . "\n" if -d $node;
    }
    closedir($dirHandler);
}
readdir returns only the node name without any path information. The file test operators look in the current working directory if no path is specified, and because the current directory isn't $currNode->rootDirectory, the nodes won't be found.
I suggest you use rel2abs from the File::Spec::Functions core module to combine the node name with the path. You could use string concatenation instead, but the library function takes care of corner cases like whether the directory ends with a slash.
It's also worth pointing out that Perl identifiers are most often written in snake_case, and people familiar with the language will thank you for not using capital letters. They should especially be avoided for the first character of an identifier, as names like that are reserved for globals like package names.
I think your subroutine should look like this
use File::Spec::Functions 'rel2abs';

sub do_search {
    my ($curr_node) = @_;
    my $dir = $curr_node->rootDirectory;
    opendir my $dh, $dir or die qq{Unable to open directory "$dir": $!};
    while ( my $node = readdir $dh ) {
        next if $node eq '.' or $node eq '..';
        my $fullname = rel2abs($node, $dir);
        print "File: $node\n" if -f $fullname;
        print "Directory $node\n" if -d $fullname;
    }
}
An alternative method is to set the current working directory to the directory being read. That way there is no need to manipulate file paths, but you would need to save and restore the original working directory before and after changing it
The Cwd core module provides getcwd and your code would look like this
use Cwd 'getcwd';

sub do_search {
    my ($curr_node) = @_;
    my $cwd = getcwd;
    chdir $curr_node->rootDirectory or die $!;
    opendir my $dh, '.' or die $!;
    while ( my $node = readdir $dh ) {
        next if $node eq '.' or $node eq '..';
        print "File: $node\n" if -f $node;
        print "Directory $node\n" if -d $node;
    }
    chdir $cwd or die $!;
}
Use the File::Find core module to get all files and subdirectories recursively.
use File::Find;

my @fileList;
find(\&getFile, $dir);

sub getFile {
    print $File::Find::name . "\n";
    # The lines below would instead collect only names containing a dot:
    # if ($File::Find::name =~ /.*\/(.*)/ && $1 =~ /\./) {
    #     push @fileList, $File::Find::name . "\n";
    # }
}
Already answered, but sometimes it is handy not to care about the implementation details, and you can use a CPAN module to hide them.
One of them is the wonderful Path::Tiny module.
Your code could look like this:
use 5.014;    # strict + feature 'say' + ...
use warnings;
use Path::Tiny;

do_search($_) for @ARGV;

sub do_search {
    my $curr_node = path(shift);
    for my $node ($curr_node->children) {
        say "Directory : $node" if -d $node;
        say "Plain File : $node" if -f $node;
    }
}
The children method excludes the . and the .. automatically.
You also need to understand that the -f test is true only for real files, so the above code excludes, for example, symlinks (which point to real files), FIFO files, and so on. Such "files" can usually be opened and read as plain files, so sometimes it is handy to use the -e && ! -d test (i.e. exists, but is not a directory) instead of -f.
Path::Tiny has methods for this, e.g. you could write
for my $node ($curr_node->children) {
    print "Directory : $node\n" if $node->is_dir;
    print "File : $node\n" if $node->is_file;
}
The is_file method usually does what you mean, i.e. -e && ! -d.
Using Path::Tiny you could also easily extend your function to walk the whole tree using the iterator method:
use 5.014;
use warnings;
use Path::Tiny;

do_search($_) for @ARGV;

sub do_search {
    # maybe you need some error checking here for the existence of the argument
    my $iterator = path(shift)->iterator({ recurse => 1 });
    while ( my $node = $iterator->() ) {
        say "Directory : ", $node->absolute if $node->is_dir;
        say "File : ", $node->absolute if $node->is_file;
    }
}
The above prints the type of every file and directory recursively down from the given argument.
And so on... Path::Tiny is really worth having installed.

List content of a directory except hidden files in Perl

My code displays all files within the directory, but I need it not to display hidden files such as "." and "..".
opendir(D, "/var/spool/postfix/hold/") || die "Can't open directory: $!\n";
while (my $f = readdir(D)) {
    print "MailID :$f\n";
}
closedir(D);
It sounds as though you might be wanting to use the glob function rather than readdir:
while (my $f = </var/spool/postfix/hold/*>) {
print "MailID: $f\n";
}
<...> is an alternate way of globbing; you can also just use the function directly:
while (my $f = glob "/var/spool/postfix/hold/*") {
This will automatically skip the hidden files.
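As a quick sketch of that behaviour (the spool path is just the example from the question; any directory works): glob follows shell rules, so '*' never matches names that start with a dot, while readdir returns everything:

```perl
use strict;
use warnings;

# glob() follows shell conventions: a leading dot is only matched
# when the pattern itself starts with a dot.
my $dir = '/var/spool/postfix/hold';      # example path
my @visible = glob "$dir/*";              # dot-files are skipped
my @everything = glob "$dir/* $dir/.*";   # the second pattern picks up
                                          # dot-files ('.' and '..' too)
```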
Just skip the files you don't want to see:
while (my $f = readdir(D)) {
    next if $f eq '.' or $f eq '..';
    print "MailID :$f\n";
}
On a Linux system, "hidden" files and folders are those starting with a dot.
It is best to use lexical directory handles (and file handles).
It is also important to always use strict and use warnings at the start of every Perl program you write.
This short program uses a regular expression to check whether each name starts with a dot.
use strict;
use warnings;

opendir my $dh, '/var/spool/postfix/hold' or die "Can't open directory: $!\n";

while ( my $node = readdir($dh) ) {
    next if $node =~ /^\./;
    print "MailID: $node\n";
}

Delete wildcard in perl

I am new to Perl, but I thought the following should work. I have the following snippet of a larger Perl script:
@mylist = ("${my_dir}AA_???_???.DAT", "${my_dir}AA???.DAT");
foreach my $list (@mylist) {
    if (-e $list) {
        system("cp ${list} ${my_other_dir}");
    }
}
The above snippet is not able to find the files matching the wildcard AA_???_???.DAT, but it is able to find the file name with the wildcard AA???.DAT.
I have also tried deleting the files AA_???_???.DAT with
unlink(glob(${my_dir}AA_???_???.DAT"))
but the script just hangs. It is, however, able to delete files matching AA???.DAT using:
unlink(glob("${my_dir}AA???.DAT))
What could be the reasons?
-e $list checks for the existence of a file with that literal name and does no wildcard expansion, so it returns false for both AA_???_???.DAT and AA???.DAT (unless you actually have a file named exactly that). It's not true that one works and the other one doesn't.
It's also not true that unlink(glob(${my_dir}AA_???_???.DAT")) hangs. For starters, it doesn't even compile.
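To make the distinction concrete, here is a small sketch (the pattern and path are examples): -e tests one literal file name, while glob expands the wildcard first and returns the real matches:

```perl
use strict;
use warnings;

my $pattern = '/tmp/AA_???_???.DAT';   # example pattern

# -e asks "is there a file literally named AA_???_???.DAT?" -- it
# performs no wildcard expansion at all.
print "literal -e: ", (-e $pattern ? "yes" : "no"), "\n";

# glob expands the shell-style wildcard and yields actual file names.
for my $file (glob $pattern) {
    print "matched: $file\n";
}
```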
I would use the opendir and readdir built-in functions (modified from the documentation example):
opendir(my $dh, $some_dir) || die "can't opendir $some_dir: $!";
my @mylist = grep { /^(AA_..._...\.DAT|AA...\.DAT)$/ && -f "$some_dir/$_" } readdir($dh);
closedir $dh;
Then you can plug in your original loop (note that the names in @mylist have no path, so it must be added back):
foreach my $list (@mylist) {
    system("cp $some_dir/$list $my_other_dir/");
}
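Instead of shelling out to cp, the core File::Copy module does the same job without spawning a shell, which also sidesteps quoting problems with unusual file names. A sketch with placeholder values (the directories and file list here are illustrative, not from the question):

```perl
use strict;
use warnings;
use File::Copy qw(copy);

# Example values; in the answer above these come from readdir matching.
my $some_dir     = '/tmp/src';
my $my_other_dir = '/tmp/dst';
my @mylist       = ('AA_001_002.DAT');

for my $name (@mylist) {
    # copy() returns true on success and sets $! on failure.
    copy("$some_dir/$name", "$my_other_dir/$name")
        or warn "copy failed for $name: $!";
}
```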
For recursive file operations on a directory tree I really like to use the File::Find module. It traverses subdirectories, passing each file to a specified subroutine that processes that file. As an example:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my @dirs = ('/path/to/dir');
my $my_other_dir = '/path/to/otherdir';

find(\&process_files, @dirs);

sub process_files {
    my $file = $_;
    my $fullpath = $File::Find::name;
    return if $file !~ /^AA_..._...\.DAT$/
          and $file !~ /^AA...\.DAT$/;
    system("cp $fullpath $my_other_dir/");
}

Odd file handling in perl on OS X

I'm very much a perl newbie, so bear with me.
I was looking for a way to recurse through folders in OS X and came across this solution: How to traverse all the files in a directory...
I modified perreal's answer (see code below) slightly so that I could specify the search folder in an argument; i.e. I changed my @dirs = ("."); to my @dirs = ($ARGV[0]);
But for some reason this wouldn't work: it would open the folder, but would not identify any of the subdirectories as folders, apart from '.' and '..', so it never actually went beyond the specified root.
If I actively specified the folder (e.g. /Volumes/foo/bar) it still doesn't work. But if I go back to my @dirs = ("."); and then sit in my desired folder (foo/bar) and call the script from its own folder (foo/boo/script.pl), it works fine.
Is this 'expected' behaviour? What am I missing?!
Many thanks,
Mat
use warnings;
use strict;
my @dirs = (".");
my %seen;
while (my $pwd = shift @dirs) {
    opendir(DIR, "$pwd") or die "Cannot open $pwd\n";
    my @files = readdir(DIR);
    closedir(DIR);
    foreach my $file (@files) {
        if (-d $file and ($file !~ /^\.\.?$/) and !$seen{$file}) {
            $seen{$file} = 1;
            push @dirs, "$pwd/$file";
        }
        next if ($file !~ /\.txt$/i);
        my $mtime = (stat("$pwd/$file"))[9];
        print "$pwd $file $mtime";
        print "\n";
    }
}
The problem is that you are using the -d operator on the file basename without its path. Perl will look in the current working directory for a directory of that name and return true if it finds one there, when it should be looking in $pwd.
This solution changes $file to always hold the full name of the file or directory, including the path.
use strict;
use warnings;

my @dirs = (shift);
my %seen;
while (my $pwd = shift @dirs) {
    opendir DIR, $pwd or die "Cannot open $pwd\n";
    my @files = readdir DIR;
    closedir DIR;
    foreach (@files) {
        next if /^\.\.?$/;
        my $file = "$pwd/$_";
        next if $seen{$file};
        if ( -d $file ) {
            $seen{$file} = 1;
            push @dirs, $file;
        }
        elsif ( $file =~ /\.txt$/i ) {
            my $mtime = (stat $file)[9];
            print "$file $mtime\n";
        }
    }
}
Or just use the full path with -d:
-d "$pwd/$file"

Recursive Perl detail need help

I think this is a simple problem, but I've been stuck on it for some time now! I need a fresh pair of eyes on this.
The thing is, I have this code in Perl:
#!c:/Perl/bin/perl
use CGI qw/param/;
use URI::Escape;

print "Content-type: text/html\n\n";

my $directory = param('directory');
$directory = uri_unescape($directory);
my @contents;
readDir($directory);
foreach (@contents) {
    print "$_\n";
}

#------------------------------------------------------------------------
sub readDir(){
    my $dir = shift;
    opendir(DIR, $dir) or die $!;
    while (my $file = readdir(DIR)) {
        next if ($file =~ m/^\./);
        if (-d $dir.$file) {
            #print $dir.$file. " ----- DIR\n";
            readDir($dir.$file);
        }
        push @contents, ($dir . $file);
    }
    closedir(DIR);
}
I've tried to make it recursive. I need to get all the files of all the directories and subdirectories, with the full path, so that I can open the files in the future.
But my output only returns the files in the current directory and the files in the first directory that it finds. If I have 3 folders inside the directory, it only shows the first one.
Example cmd call:
"perl readDir.pl directory=C:/PerlTest/"
Thanks
Avoid wheel reinvention, use CPAN.
use Path::Class::Iterator;

my $it = Path::Class::Iterator->new(
    root          => $dir,
    breadth_first => 0,
);

until ($it->done) {
    my $f = $it->next;
    push @contents, $f;
}
Make sure that you don't let people set $dir to something that will let them look somewhere you don't want them to look.
Your problem is the scope of the directory handle DIR. DIR has global scope, so each recursive call to readDir is using the same DIR; when the inner call does closedir(DIR) and returns, the caller then does a readdir on a closed directory handle and everything stops. The solution is to use a lexical directory handle:
sub readDir {
    my ($dir) = @_;
    opendir(my $dh, $dir) or die $!;
    while (my $file = readdir($dh)) {
        next if ($file eq '.' || $file eq '..');
        my $path = $dir . '/' . $file;
        if (-d $path) {
            readDir($path);
        }
        push(@contents, $path);
    }
    closedir($dh);
}
Also notice that you would be missing a directory separator if it wasn't (a) at the end of $directory or (b) added on every recursive call. AFAIK, slashes will be internally converted to backslashes on Windows, but you might want to use a path-mangling module from CPAN anyway (I only care about Unix systems, so I don't have any recommendations).
I'd also recommend that you pass a reference to @contents to readDir rather than leaving it as a global variable; there are fewer errors and less confusion that way. And don't use parentheses on sub definitions unless you know exactly what they do and what they're for. Some sanity checking and scrubbing of $directory would be a good idea as well.
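As a sketch of that last suggestion (the names and command-line driver here are illustrative, not from the original post), the walker can take an array reference from the caller instead of pushing onto a global:

```perl
use strict;
use warnings;

# Same recursive walk as above, but results go into a caller-supplied
# array reference rather than a global @contents.
sub read_dir {
    my ($dir, $contents) = @_;
    opendir my $dh, $dir or die "Cannot open $dir: $!";
    while (defined(my $file = readdir $dh)) {
        next if $file eq '.' || $file eq '..';
        my $path = "$dir/$file";
        read_dir($path, $contents) if -d $path;
        push @$contents, $path;
    }
    closedir $dh;
}

# Only walk when a start directory is supplied on the command line.
if (@ARGV) {
    my @contents;
    read_dir($ARGV[0], \@contents);
    print "$_\n" for @contents;
}
```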
There are many modules that are available for recursively listing files in a directory.
My favourite is File::Find::Rule
use strict;
use Data::Dumper;
use File::Find::Rule;

my $dir = shift;    # get directory from command line
my @files = File::Find::Rule->in($dir);
print Dumper(\@files);
This sends the list of files into an array (which is what your program was doing):
$VAR1 = [
'testdir',
'testdir/file1.txt',
'testdir/file2.txt',
'testdir/subdir',
'testdir/subdir/file3.txt'
];
There are loads of other options, like only listing files with particular names. Or you can set it up as an iterator, which is described in "How can I use File::Find".
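For reference, the iterator form looks roughly like this (File::Find::Rule is a CPAN module; the directory driver and the name pattern below are illustrative): start() primes the rule on a directory and match() hands back one path per call, so a large tree is never held in memory all at once.

```perl
use strict;
use warnings;
use File::Find::Rule;

# Iterator usage: match() returns one path at a time and undef when
# the traversal is exhausted.
if (@ARGV) {
    my $rule = File::Find::Rule->file->name('*.txt')->start($ARGV[0]);
    while (defined(my $path = $rule->match)) {
        print "$path\n";
    }
}
```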
How can I use File::Find in Perl?
If you want to stick to modules that come with Perl Core, have a look at File::Find.