Creating a script to delete old log files that allows me to pass directory/age/regex arguments - perl

I'm trying to create a Perl script to delete old log files. One of the key things I want the script to be able to do is allow me to pass arguments for the directory, the name of the file (such as test.log-*), and the age of the file.
It's been a while since I've used Perl and I'm not that great at it anyway, so I'd appreciate some help. I'm also not terribly familiar with the Getopt::Long module. Here's what I'm thinking so far, and while I'm sure it's not correct, please give me any feedback that might assist.
I want to run the script along the lines of "script.pl --dir /release/logs --type test.log-* --days 7"
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use Getopt::Long;

my ($dir, $type, $days);
GetOptions( 'dir=s'  => \$dir,
            'type=s' => \$type,
            'days=i' => \$days );

foreach my $file (<$dir/$type>) {
    if (-M $file > $days) {
        print "\nDeleting log more than $days days old: $file";
        unlink $file or die "\nFailed to remove $file: $!";
    }
}
exit;

If you insist on using Perl, look into File::Find and friends. Though if you're on a *nix box, you should probably be aware of find(1) for tasks this common.
Try: find /release/logs -name test.log-\* -mtime +7 -delete
If you want to test it out first, leave off the -delete flag and it will just print a list of the files it would otherwise have deleted.
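If you do stay with Perl, a minimal File::Find sketch of the same cleanup might look like this; translating the --type wildcard into a regex and treating "older than N days" as -M greater than N are my assumptions, not something from the question:
#!/usr/bin/perl
# Rough sketch only: --type is assumed to be a shell-style wildcard that we
# translate to a regex, and "old" means -M (days since modification) > --days.
use strict;
use warnings;
use File::Find;
use Getopt::Long;

my ($dir, $type, $days) = ('.', '*', 7);
GetOptions( 'dir=s' => \$dir, 'type=s' => \$type, 'days=i' => \$days );

(my $pattern = quotemeta $type) =~ s/\\\*/.*/g;   # test.log-* becomes test\.log\-.*

find(sub {
    return unless -f $_ && /^$pattern$/;
    if (-M $_ > $days) {
        print "Deleting log more than $days days old: $File::Find::name\n";
        unlink $_ or warn "Failed to remove $File::Find::name: $!";
    }
}, $dir);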

Perl: search for a particular file extension in a folder and sub-folders

I have a folder which has over 1500 files with the extension .fna scattered around in different sub-folders. I was wondering if there is a simple way in Perl to extract all these files and store them in a different location?
As File::Find is recommended everywhere, let me add that there are other, sometimes nicer, options, like Path::Iterator::Rule (https://metacpan.org/pod/Path::Iterator::Rule) or Path::Class's traverse function.
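For example, here is a minimal Path::Iterator::Rule sketch; the source path is a placeholder for wherever your .fna files live:
use strict;
use warnings;
use Path::Iterator::Rule;

my $source = '/path/to/source';    # placeholder directory

my $rule = Path::Iterator::Rule->new->file->name('*.fna');
my $next = $rule->iter($source);
while ( defined( my $file = $next->() ) ) {
    print "$file\n";    # or copy/move it to the new location
}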
Which OS are you using? If it's Windows, I think a simple xcopy command would be a lot easier. Open a console window and type "xcopy /?" to get the info on this command. It should be something simple like:
xcopy directory1\*.fna directory2 /s
use File::Find;

my @files;
# File::Find wants a directory to start from, not a glob pattern;
# filter on the .fna extension inside the wanted routine instead.
find(\&search, '/some/path');
doSomethingWith(@files);
exit;

sub search {
    return unless -f && /\.fna$/;
    push @files, $File::Find::name;
    return;
}
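To actually store the collected files somewhere else, a minimal follow-up sketch using File::Copy and File::Basename might look like this (the target path is a placeholder, and flattening the sub-folder structure is an assumption):
use File::Copy qw(move);
use File::Basename qw(basename);

my $target = '/some/other/path';    # placeholder target directory

for my $file (@files) {
    # Keep only the file name, dropping the original sub-folder structure.
    move( $file, "$target/" . basename($file) )
        or warn "Could not move $file: $!";
}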
Without much more information to go on: you don't need a Perl script to do something as easy as this.
Here's a *nix one-liner
find /source/dir -name "*.fna" -exec mv -t /target/dir '{}' \+ -print
Sorry for the late response; I was away at a conference. Here is my code, which seems to work fine so far.
use strict;
use warnings;
use Cwd;
use FileHandle;

open my $out, '>>', 'results7.txt' or die "Cannot open results7.txt: $!";
my $parent = "/home/denis/Denis_data/Ordered species";

my ($par_dir, $sub_dir);
opendir($par_dir, $parent) or die "Cannot open $parent: $!";
while (my $sub_folders = readdir($par_dir)) {
    next if ($sub_folders =~ /^\.\.?$/);    # skip . and ..
    my $path = $parent . '/' . $sub_folders;
    #my $path = $sub_folders;
    next unless (-d $path);                 # skip anything that isn't a directory
    chdir($path) or die "Cannot chdir to $path: $!";
    system 'perl batch_hmm.pl';
    print $out $path . "\n";
    #chdir('..') or die;
    #closedir($sub_dir);
}
closedir($par_dir);
I will also try the File::Find option. The above one looks quite messy.
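For what it's worth, a rough File::Find sketch of the same loop (paths and the batch_hmm.pl call copied from the code above; everything else is just one way to arrange it):
use strict;
use warnings;
use File::Find;

my $parent = "/home/denis/Denis_data/Ordered species";

open my $out, '>>', 'results7.txt' or die "Cannot open results7.txt: $!";

find( {
    no_chdir => 1,
    wanted   => sub {
        return unless -d $_;                       # directories only
        return if $File::Find::name eq $parent;    # skip the top level itself
        chdir $File::Find::name or die "Cannot chdir to $File::Find::name: $!";
        system 'perl batch_hmm.pl';
        print {$out} "$File::Find::name\n";
        $File::Find::prune = 1;                    # do not descend any deeper
    },
}, $parent );

close $out;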

How to create a script from a Perl script which will use bash features to copy a directory structure

Hi, I have written a Perl script which copies an entire directory structure from source to destination. Now I have to create a restore script from the Perl script which will undo what the Perl script has done, i.e. create a (shell) script which can use bash features to restore the contents from destination back to source. I'm struggling to find the correct function or command which can copy recursively (not a hard requirement), but I want exactly the same structure as it was before.
Below is the way I am trying to create a file called restore to do the restoration process.
I am particularly looking for the algorithm.
Also, restore will restore the structure to a command-line directory input if one is supplied; if not, you can assume the default input supplied to the Perl script:
$source
$target
In this case we would want to copy from target to source.
So we have two different parts in one script:
1. Copy from source to destination.
2. Create a script file which will undo what part 1 has done.
I hope this makes it clear.
unless(open FILE, '>'."$source/$file")
{
# Die with error message
# if we can't open it.
die "\nUnable to create $file\n";
}
# Write some text to the file.
print FILE "#!/bin/sh\n";
print FILE "$1=$target;\n";
print FILE "cp -r \n";
# close the file.
close FILE;
# here we change the permissions of the file
chmod 0755, "$source/$file";
The last problem I have is that I couldn't get $1 into my restore file, as it refers to a variable in Perl,
but I need it for getting command-line input when I run restore, as in: $0 = ./restore, $1 = /home/xubuntu/User.
First off, the standard way in Perl for doing this:
unless(open FILE, '>'."$source/$file") {
die "\nUnable to create $file\n";
}
is to use the or statement:
open my $file_fh, '>', "$source/$file"
    or die "Unable to create '$file': $!";
It's just easier to understand.
A more modern way would be to use autodie, which will handle all I/O problems when opening or writing to files.
use strict;
use warnings;
use autodie;
open my $file_fh, '>', "$source/$file";
You should look at the Perl modules File::Find, File::Basename, and File::Copy for copying files and directories:
use File::Find;
use File::Basename;
use File::Copy;

my @file_list;
find( sub {
        return unless -f;
        push @file_list, $File::Find::name;
    },
    $directory );
Now, @file_list will contain all the files in $directory.
for my $file ( @file_list ) {
    my $directory = dirname $file;
    mkdir $directory unless -d $directory;
    copy $file, ...;
}
Note that autodie will also terminate your program if the mkdir or copy commands fail.
I didn't fill in the copy command because where you want to copy and how may differ. Also you might prefer use File::Copy qw(cp); and then use cp instead of copy in your program. The copy command will create a file with default permissions while the cp command will copy the permissions.
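As an illustration only, a filled-in version of that loop might look like the sketch below; $target_dir, the use of abs2rel, and make_path are my assumptions about how you might lay out the destination ($directory here is the search root from the find call above):
use File::Spec::Functions qw(catfile abs2rel);
use File::Path qw(make_path);

my $target_dir = '/path/to/backup';    # assumed destination root

for my $file ( @file_list ) {
    # Recreate the file's sub-directory under $target_dir, then copy it there.
    my $dest = catfile( $target_dir, abs2rel( $file, $directory ) );
    make_path( dirname $dest );
    copy( $file, $dest ) or die "Copy of $file failed: $!";
}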
You didn't explain why you wanted a bash shell command. I suspect you wanted to use it for the directory copy, but you can do that in Perl anyway. If you still need to create a shell script, the easiest way is via a here document:
print {$file_fh} <<END_OF_SHELL_SCRIPT;
Your shell script goes here
and it can contain as many lines as you need.
Since there are no quotes around `END_OF_SHELL_SCRIPT`,
Perl variables will be interpolated
This is the last line. The END_OF_SHELL_SCRIPT marks the end
END_OF_SHELL_SCRIPT
close $file_fh;
See Here-docs in Perldoc.
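Since the question stumbled over $1, it is worth noting that a backslash stops interpolation inside an interpolated here-doc. A rough sketch of writing the restore script that way (the cp line and the restore file name are only illustrative):
# \$1 stays literal in the generated script; $source and $target are
# interpolated by Perl when the script is written.
open my $restore_fh, '>', "$source/restore" or die "Unable to create restore: $!";

print {$restore_fh} <<END_OF_SHELL_SCRIPT;
#!/bin/sh
# Restore $target back to the directory given as \$1 (default: $source).
dest=\${1:-$source}
cp -r "$target"/. "\$dest"
END_OF_SHELL_SCRIPT

close $restore_fh;
chmod 0755, "$source/restore";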
First, I see that you want to make a copy-script - because if you only need to copy files, you can use:
system("cp -r /sourcepath /targetpath");
Second, if you need to copy subfolders, you can use the -r switch, can't you?

How to call script within subfolders

I have a script that I'm using to remove duplicate Calendar entries. The root mail folder contains folders, each named firstname_lastname; beneath each one is /Calendar/#msgs/.
As of now, I'm running the script manually by going to the user's folder and starting the script with /Users/Documents/duplicates/dups.pl . --killdups
Is there a way that I could easily have it loop through all the users' mail folders, look in the respective /Calendar/#msgs/ folder, and run the script?
There are a couple ways you can go with this, depending on what you want to do.
First, you can make your script search every folder under its starting directory. You don't specify anything on the command line.
use File::Spec::Functions qw(catfile);

my @users = glob( '/Users/*' );

foreach my $user ( @users ) {   # $user looks like /Users/Buster
    my $calendar_dir = catfile( $user, 'Calendar', '#msgs' );
    ...
}
You could also use opendir to get the list of users so you get back one directory at a time:
opendir my $dh, '/Users' or die ...;
while( my $user = readdir $dh ) {
    next if $user =~ /^\.\.?\z/;   # and anything else you want to skip
    ...                            # do the cool stuff
}
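Either way, once you have a calendar directory you can run the existing script against it. A rough sketch, assuming dups.pl accepts a directory argument the way the "." in the question suggests:
use File::Spec::Functions qw(catfile);

for my $user ( glob '/Users/*' ) {
    my $calendar_dir = catfile( $user, 'Calendar', '#msgs' );
    next unless -d $calendar_dir;

    # Run the de-duplication script against this user's calendar folder.
    system( '/Users/Documents/duplicates/dups.pl', $calendar_dir, '--killdups' ) == 0
        or warn "dups.pl failed for $calendar_dir: $?";
}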
Second, you can make it search selected folders. Suppose that you are in your home directory. To kill the duplicates for the particular users, you'd call your script with those user's names:
dups.pl --killdups Buster Mimi Roscoe
To go through all users, maybe something like this (it almost looks like you are on MacOS X, but not quite, so I'm not sure which path you need), using a command-line glob:
dups.pl --killdups /Users/*
The solution looks similar, but you take the users from @ARGV instead of using a glob:
foreach my $user ( @ARGV ) {
    ...
}
That should be enough to get you started. You'll have to integrate this with the rest of your script and fix up the paths in each case to be what you need, but that's just simple string manipulation (or even simpler than that with File::Spec).
Pass in the folders it should look at on the command line. The arguments will be in @ARGV; you just loop over it.
Edit: Maybe you prefer an elegant Perl solution?
#!/usr/bin/perl -w
# CC-by Cedric 'levif' Le Dillau.
use File::Find;

@ARGV = qw(.) unless @ARGV;

find sub { apply_to_folder($File::Find::name) if -d }, @ARGV;

sub apply_to_folder {
    my $folder = shift;
    printf "folder: %s\n", $folder;
}
Then, your apply_to_folder() function can be whatever you want.
Note that replacing -d by -f or -f && -x can change the filtering feature.
(help can be found with perldoc -f -X)
An older proposition was:
Try using:
$ find "/Calendar/#msgs/" -type d -exec dups.pl "{}" --killdups \;
Or Perl's opendir()/readdir() functions:
$ perldoc -f opendir

How do I run a Perl script on multiple input files with the same extension?

How do I run a Perl script on multiple input files with the same extension?
perl scriptname.pl file.aspx
I'm looking to have it run for all aspx files in the current directory
Thanks!
In your Perl file:
my @files = <*.aspx>;
for my $file (@files) {
    # do something.
}
The <*.aspx> is called a glob.
You can pass those files to Perl with a wildcard.
In your script:
foreach (@ARGV) {
    print "file: $_\n";
    # open your file here...
    # ...do something
    # close your file
}
On the command line:
$ perl myscript.pl *.aspx
You can use glob explicitly, to use shell parameters without depending too much on the shell behaviour.
for my $file ( map { glob($_) } @ARGV ) {
    print $file, "\n";
}
You may need to handle the possibility of duplicate file names when more than one parameter expands to the same file.
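A minimal sketch of that de-duplication, using a hash of names already seen (the %seen hash is just a convention, not part of the original answer):
my %seen;
for my $file ( grep { !$seen{$_}++ } map { glob($_) } @ARGV ) {
    print $file, "\n";
}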
For a simple one-liner with -n or -p, you want
perl -i~ -pe 's/foo/bar/' *.aspx
The -i~ says to modify each target file in place, and leave the original as a backup with an ~ suffix added to the file name. (Omit the suffix to not leave a backup. But if you are still learning or experimenting, that's a bad idea; removing the backups when you're done is a much smaller hassle than restoring the originals from a backup if you mess something up.)
If your Perl code is too complex for a one-liner (or just useful enough to be reusable) obviously replace -e '# your code here' with scriptname.pl ... though then maybe refactor scriptname.pl so that it accepts a list of file name arguments, and simply use scriptname.pl *.aspx to run it on all *.aspx files in the current directory.
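As a rough sketch of that refactoring, with the s/foo/bar/ edit and the ~ backup suffix simply mirroring the one-liner above:
#!/usr/bin/perl
# scriptname.pl -- edit every file named on the command line in place,
# keeping a backup with a ~ suffix, like perl -i~ -pe 's/foo/bar/'.
use strict;
use warnings;

$^I = '~';       # enable in-place editing with a backup suffix
while (<>) {     # reads each file named in @ARGV line by line
    s/foo/bar/;
    print;       # output is written back into the edited file
}
Run it as scriptname.pl *.aspx to process all *.aspx files in the current directory.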
If you need to recurse a directory structure and find all files with a particular naming pattern, the find utility is useful.
find . -name '*.aspx' -exec perl -pi~ -e 's/foo/bar/' {} +
If your find does not support -exec ... + try with -exec ... \; though it will be slower and launch more processes (one per file you find instead of as few as possible to process all the files).
To only scan some directories, replace . (which names the current directory) with a space-separated list of the directories to examine, or even use find to find the directories themselves (and then perhaps explore -execdir for doing something in each directory that find selects with your complex, intricate, business-critical, maybe secret list of find option predicates).
Maybe also explore find2perl to do this directory recursion natively in Perl.
If you are on a Linux machine, you could try something like this:
for i in /tmp/*.aspx; do perl scriptname.pl "$i"; done
For example, to handle perl scriptname.pl *.aspx *.asp:
In Linux, the shell expands wildcards, so the Perl can simply be
for (@ARGV) {
    operation($_);    # do something with each file
}
Windows doesn't expand wildcards, so expand the wildcards in each argument in Perl as follows. The for loop then processes each file in the same way as above:
for (map { glob } @ARGV) {
    operation($_);    # do something with each file
}
For example, this will print the expanded list under Windows:
print "$_\n" for map { glob } @ARGV;
You can also pass the path where you have your aspx files and read them one by one.
#!/usr/bin/perl -w
use strict;

my $path  = shift;
my @files = split /\n/, `ls $path/*.aspx`;

foreach my $file (@files) {
    # do something...
}

How do I get a directory listing in Perl? [duplicate]

This question already has answers here:
How do I read in the contents of a directory in Perl?
I would like to execute ls in a Perl program as part of a CGI script. For this I used exec("ls"), but this does not return from the exec call.
Is there a better way to get a listing of a directory in Perl?
exec doesn't return at all. If you wanted that, use system.
If you just want to read a directory, opendir/readdir/closedir may be more appropriate.
opendir my($dh), $dirname or die "Couldn't open dir '$dirname': $!";
my #files = readdir $dh;
closedir $dh;
#print files...
Everyone else seems stuck on the exec portion of the question.
If you want a directory listing, use Perl's built-in glob or opendir. You don't need a separate process.
exec does not give control back to the Perl program.
system will, but it does not return the results of an ls; it returns a status code.
Backticks (``) will give you the output of your command, but are considered by some to be unsafe.
Use the built in dir functions.
opendir, readdir, and so on.
http://perldoc.perl.org/functions/opendir.html
http://perldoc.perl.org/functions/readdir.html
In order to get the output of a system command you need to use backticks.
$listing = `ls`;
However, Perl is good at dealing with directories by itself. I'd recommend using File::Find::Rule.
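For a plain, non-recursive listing, a minimal File::Find::Rule sketch (restricting the depth so it behaves like ls is my assumption):
use File::Find::Rule;

# Everything directly inside $dirname, without recursing into subdirectories.
my @entries = File::Find::Rule->mindepth(1)->maxdepth(1)->in($dirname);
print "$_\n" for @entries;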
Yet another example:
chdir $dir or die "Cannot chdir to $dir: $!\n";
my #files = glob("*.txt");
Use Perl globbing:
my @files = </dir/path/*>;
EDIT: Whoops! I thought you just wanted a listing of the directories... remove the 'directory' call to make this script do what you want it to...
Playing with filehandles is the wrong way to go, in my opinion. The following is an example of using File::Find::Rule to find all the directories in a specified directory. It may seem like overkill for what you're doing, but later down the line it may be worth it.
First, my one line solution:
File::Find::Rule->maxdepth(1)->directory->in($base_dir);
Now a more drawn out version with comments. If you have File::Find::Rule installed you should be able to run this no problem. Don't fear the CPAN.
#!/usr/bin/perl
use strict;
use warnings;

# See http://search.cpan.org/~rclamp/File-Find-Rule-0.32/README
use File::Find::Rule;

# If a base directory was not passed to the script, assume the current working directory
my $base_dir = shift // '.';

my $find_rule = File::Find::Rule->new;

# Do not descend past the first level
$find_rule->maxdepth(1);

# Only return directories
$find_rule->directory;

# Apply the rule and retrieve the subdirectories
my @sub_dirs = $find_rule->in($base_dir);

# Print out the name of each directory on its own line
print join("\n", @sub_dirs);
I would recommend you have a look at IPC::Open3. It allows for far more control over the spawned process than system or the backticks do.
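A minimal IPC::Open3 sketch for capturing the output (and errors) of ls; the error handling here is only illustrative:
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;    # separate handle for stderr
my $pid = open3( my $in, my $out, $err, 'ls', '-l' );

close $in;           # ls needs no input
my @listing = <$out>;
my @errors  = <$err>;
waitpid( $pid, 0 );

print @listing;
warn "ls reported: @errors" if @errors;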
On Linux, I prefer find:
my #files = map { chomp; $_ } `find`;