Perl - locate the latest subdirectory on a network path and copy the entire contents

I want to locate the latest subdirectory on a network path and copy its entire contents into another folder on the network path.
We have a lot of subfolders under the folder \\10.184.132.202\projectdump. I need to sort the subfolders, get into the latest one, and copy its entire contents into another folder on \\10.184.132.203\baseline.
With the script below I am able to list the latest modified folder under the directory, but I do not know how to copy its contents.
use File::stat;
use File::Copy qw(copy);

$dirname     = '\\\\10.184.132.202\\projectdump\\Testing\\';
$destination = '\\\\10.184.132.203\\baseline\\Testing\\';
$timediff = 0;

opendir DIR, "$dirname";
while (defined ($sub_dir = readdir(DIR))) {
    if ($sub_dir ne "." && $sub_dir ne "..") {
        $diff = time() - stat("$dirname/$sub_dir")->mtime;
        if ($timediff == 0) {
            $timediff = $diff;
            $newest   = $sub_dir;
        }
        if ($diff < $timediff) {
            $timediff = $diff;
            $newest   = $sub_dir;
        }
    }
}
print $newest, "\n";

open my $in, '<', $newest or die $!;
while (<$in>) {
    copy *, $destination;   # <-- Here I want to copy the entire contents of $newest to $destination.
}

Use File::Copy::Recursive, which lets you copy entire directory trees. It is not a core Perl module, but you can install it via the cpan command.
If installing modules is a problem (sometimes it is), you can use File::Find to walk the directory tree and copy files one at a time.
By the way, you can use forward slashes in Perl for Windows file names, so you don't have to double up on backslashes.
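For example, a minimal sketch using the $dirname, $newest and $destination variables from the question:
use File::Copy::Recursive qw(dircopy);

# Full path of the newest subdirectory found above.
my $from = $dirname . $newest;

# Recursively copy the whole tree to the destination folder.
dircopy($from, $destination)
    or die "Cannot copy $from to $destination: $!";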

Why not call a simple shell command to find the latest dir?
I think this will be much simpler in shell...
my $newestdir = `ls -1rt $dirname | tail -n 1`;
in shell:
LATESTDIR=`ls -1rt $dirname|tail -n 1`
cp -r ${LATESTDIR}/* $destination/
Oops, I just realized that you might be using Windows...
Get all dirs and their times into a hash, then sort that hash in reverse order to find the newest one:
my ($newest) = sort { $hash{$b} <=> $hash{$a} } keys %hash;
then
opendir NDIR, "$newest";
while (defined(my $file = readdir NDIR)) {
    next if $file eq '.' or $file eq '..';
    copy "$newest/$file", $destination;
}
closedir NDIR;
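For completeness, the %hash above could be built along these lines (a sketch, using the $dirname from the question and keeping only directories):
opendir my $dh, $dirname or die "Cannot open $dirname: $!";
my %hash;
for my $entry (readdir $dh) {
    next if $entry eq '.' or $entry eq '..';
    my $path = "$dirname$entry";
    $hash{$path} = (stat $path)[9] if -d $path;   # key: path, value: mtime
}
closedir $dh;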

Related

How can I get sub-folder names in the working directory using perl?

I have a few sub-folders in the main folder. My program will do some calculations in each sub-folder. First the code creates a "result" folder in the main folder for all calculations. Then, for the calculation in each sub-folder, I want to create a folder inside the "result" folder with the same name as the sub-folder.
My working directory is "/home/abc/Desktop/test". The "test" folder is my main folder. There are "a", "b" and "c" sub-folders in the "test" folder. My code creates the "result" folder in the "test" main folder, but it should also create "a", "b" and "c" sub-folders in the "result" folder. How can I fix my code?
#!/usr/bin/env perl
use strict;
use warnings;
use File::Path qw/make_path/;
use Cwd;

my $dir = cwd();
opendir (DIR, $dir) or die "Unable to open current directory! $!\n";
my @subdirs = readdir (DIR) or die "Unable to read directory! $!\n";
closedir DIR;

my $result_path = "$dir/results";
make_path("$result_path");

foreach my $subdir ( sort @subdirs ) {
    chdir($subdir) or die "Cannot cd to $dir: $!\n";
    make_path("$result_path/$subdir");
    system("echo '1 0' | program -f data.mol -o $result_path/$subdir outfile.txt");
    chdir("..");
}
I don't think File::Find::Rule is a good choice for this problem. The module's speciality is recursively searching directory trees, and here you just want a list of all the directories in a single folder. That can be done very simply with grep -d, glob '*'.
Here's a version that uses the File::chdir module, as per your previous question. It avoids the need for Cwd and File::Basename, and it allows you to localise the current working directory so that there is no need for chdir '..' at the end of each loop.
use strict;
use warnings;
use File::chdir;

my @folders = grep -d, glob '*';

my $result_path = "$CWD/result";
mkdir $result_path;

for my $folder ( @folders ) {
    my $result_folder = "$result_path/$folder";
    mkdir $result_folder;
    local $CWD = $folder;
    system("echo '1 0' | program -f data.mol -o $result_folder/output.txt");
}
File::Find::Rule->directory->in( $dir );
finds all directories recursively down the directory tree with starting point $dir. For each directory it finds, you are taking the basename.
So, when it comes across $dir/test/a, the basename of that is a, and your code goes ahead and creates result/a.
I suspect you do not need to find all the directories in a tree -- but given your jumbled problem description it is not easy to be certain.
Maybe you just want to opendir the directory, readdir all the entries keeping only directories other than . and .., and closedir when you are done instead of traversing the entire tree under $dir.
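For example, a minimal sketch of that approach (assuming $dir is the folder to scan):
opendir my $dh, $dir or die "Unable to open $dir: $!";
# keep only directory entries, skipping . and ..
my @subdirs = grep { $_ ne '.' and $_ ne '..' and -d "$dir/$_" } readdir $dh;
closedir $dh;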

How to recursively copy with wildcards in perl?

I've modified some script that I've written so that it now only copies .jpg files.
The script seems to work. It will copy all of the .jpg files from one folder to another, but the script is meant to loop continually every X seconds.
If I add a new .jpg file to the source folder after I have already started the script, it will not copy over the newly added file. If I stop and restart the script, then it will copy the new .jpg file, but I want the script to copy items as they are put into the folder without having to stop and restart it.
Before I added the glob function to copy only .jpg files, the script would copy anything in the folder, even if it was moved into the folder while the script was still running.
Why is this happening? Any help would be awesome.
Here is my code:
use File::Copy;
use File::Find;

my @source = glob ("C:/sorce/*.jpg");
my $target = q{C:/target};

while (1) {
    sleep (10);
    find(
        sub {
            if (-f) {
                print "$File::Find::name -> $target";
                copy($File::Find::name, $target)
                    or die(q{copy failed:} . $!);
            }
        },
        @source
    );
}
Your @source array contains a list of file names. It should contain a list of folders to start your search in. So simply change it to:
my $source = "C:/source";
I changed it to a scalar, because it only holds one value. If you want to add more directories at a later point, an array can be used instead. Also, why mix a glob and File::Find? It makes little sense, as File::Find is recursive.
The file checking is then done in the wanted subroutine:
if (-f && /\.jpg$/i)
It won't refresh its list of files if you only glob the list once. I prefer File::Find::Rule, and would rerun it on each iteration so the list of files is refreshed.
use File::Find::Rule;
use File::Copy;

my $source_dir = 'C:/source';
my $target_dir = 'C:/target';

while (1) {
    sleep 10;
    my @files = File::Find::Rule->file()
                                ->name( '*.jpg' )
                                ->in( $source_dir );
    for my $file (@files) {
        copy $file, $target_dir
            or die "Copy failed on $file: $!";
    }
}

Moving files into different folders/directories based on their name

I have a directory or folder consisting of hundreds of files. They are named and arranged alphabetically. I want to move the files into directories or folders according to the first character of their name (i.e. files starting with a into one folder, files starting with r into another folder, etc.).
Is there a way to do it without using CPAN modules?
Are the files all in that one folder, or are they in subfolders? If they are all in a single folder, you can use opendir to access the directory, then readdir to read the file names and copy them elsewhere (using the File::Copy module's move or copy functions).
use strict;
use warnings;
use autodie;
use File::Copy;   # Gives you access to the "move" command

use constant {
    FROM_DIR => "the.directory.you.want.to.read",
    TO_DIR   => "the.directory.you.want.to.move.the.files.to",
};

# Opens FROM_DIR, so I can read from it
opendir my $dir, FROM_DIR;

# Loops through the directory
while (my $file = readdir $dir) {
    next if ($file eq "." or $file eq "..");
    my $from = FROM_DIR . "/" . "$file";
    move $from, TO_DIR;
}
This doesn't do exactly what you want, but it should give you the idea. Basically, I'm using opendir and readdir to read the files in the directory and I'm using move to move them to another directory.
I used the File::Copy module, but this is included in all Perl distributions, so it's not a CPAN module that must be installed.
Use glob(), or the built-in File::Find to build a list of files for each starting letter.
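For example, a core-modules-only sketch (the source path is hypothetical) that moves each file into a subfolder named after its first character:
use strict;
use warnings;
use File::Copy qw(move);
use File::Path qw(make_path);

my $src = 'the.directory.you.want.to.read';

opendir my $dh, $src or die "Cannot open $src: $!";
while (my $file = readdir $dh) {
    next if $file eq '.' or $file eq '..';
    next unless -f "$src/$file";            # skip subdirectories
    my $letter = lc substr $file, 0, 1;     # first character of the name
    make_path("$src/$letter") unless -d "$src/$letter";
    move "$src/$file", "$src/$letter/$file"
        or die "Cannot move $file: $!";
}
closedir $dh;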

How to traverse Subversion repository to find specific file, and stop searching further down?

I have this problem: given a Subversion repository http://svn/trunk/, I want to search the whole repository to find/list all files named exp.xml (their whole URLs). Once the first occurrence has been found, I want it to stop searching further down that URL. Just to make it clear, here are some fictitious URLs:
http://svn/trunk/pro1/sub-pro-x/exp.xml/sub-pro-x1/exp.xml
http://svn/trunk/pro2/sub-pro-y/pro-y1/exp.xml/sub-pro-y1/exp.xml
http://svn/trunk/pro3/sub-pro-z/exp.xml/sub-pro-z1/exp.xml/sub-proj/exp.xml
The result should be:
http://svn/trunk/pro1/sub-pro-x/exp.xml
http://svn/trunk/pro2/sub-pro-y/pro-y1/exp.xml
http://svn/trunk/pro3/sub-pro-z/exp.xml
Now I already have a solution, but it's not really very efficient, because I grep for exp.xml only after svn list -R has searched the whole repository (30-40 min). In case you want to know, here is the command:
svn list -R http://svn/trunk | grep /exp.xml
So my question is whether it is possible to make any significant speedup to this query. One thing I am thinking of is maybe to use some language, preferably Perl, to directly traverse http://svn/trunk/ and process all the links, stopping the traversal down a path when it finds the first exp.xml.
Thanks for your time.
If you want it to be faster, I would try checking out the SVN project and then searching the files on disk. You could perform a search using "find" in the checked-out sandbox (where "." assumes you are in the top directory of your project):
find . -name 'exp.xml'
but, similar to your "grep" solution, I don't think it achieves your "stop searching further" criterion. If you want a Perl script that searches for "exp.xml" but stops recursing once it finds a match, try this (it takes the top-level directory as an argument):
#!/usr/bin/env perl
use warnings;
use strict;

my @dirs = $ARGV[0];
my @files;

DIR:
while (my $dir = shift @dirs) {
    opendir(my $dh, $dir) or die "Couldn't open dir $dir: $!";
    my @new_dirs;
    while (my $file = readdir($dh)) {
        # skip special directories (".", "..", and ".svn")
        next if $file =~ /^\./;
        # turn file into correct relative path
        $file = "$dir/$file";
        if (-d $file) {
            push @new_dirs, $file;
        }
        if ($file eq "$dir/exp.xml") {
            # if we matched, next outer loop so we don't recurse further
            push @files, $file;
            next DIR;
        }
    }
    # if we didn't match any files, we need to check sub-dirs
    push @dirs, @new_dirs;
}
print "$_\n" for @files;
Use svn ls [URL] or svn ls -R [URL] with your script to list the SVN repository starting at [URL]. See svn ls --help for more info.
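For instance, a sketch that pipes svn list -R through the stop-at-first-match rule (it still lists the whole repository, so it is no faster than grep, but it keeps only the first exp.xml along each path; the URL is the one from the question):
my $url = 'http://svn/trunk';
my %seen;
for my $path (`svn list -R $url`) {
    chomp $path;
    # keep everything up to the first exp.xml path component, and dedupe
    next unless $path =~ m{^(.*?(?:^|/)exp\.xml)(?:/|$)};
    print "$url/$1\n" unless $seen{$1}++;
}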

How can I sync two directories with Perl?

I have a folder called "Lib" in my drive that contains many files, and copies of this "Lib" folder exist in many other places on the drive. My Perl script has to copy the most recently updated contents from the "Lib" folders and paste them into the folder "d:\perl\Latest_copy_of_Lib".
For example, I have Lib folders in d:\functions, d:\abc, and many other places. I want to find the latest copy of each file in those directories. So, if the file d:\functions\foo.txt was last modified on 2009-10-12 and d:\abc\foo.txt was last modified on 2009-10-13, then I want the version in d:\abc to be copied to the target directory.
I have used File::Find, but it searches the whole directory and copies contents that are not the latest copies.
I think you just described rsync. Unless you have some sort of weird requirements here, I don't think you need to write any code to do this. I certainly wouldn't reach for Perl to do the job you described.
You need to use File::Find to create a hash of files to move. Only put the path to a file in the hash if the file is newer than the one already stored in the hash. Here is a simple implementation. Note: there may be problems on the Windows platform; I am not used to using File::Spec to work with files and paths in a cross-platform manner.
#!/usr/bin/perl
use warnings;
use strict;

use File::Find;
use File::Spec;

my %copy;
my @sources = qw{
    /Users/cowens/foo/Lib
    /Users/cowens/bar/Lib
    /Users/cowens/baz/Lib
};

find sub {
    my ($volume, $dir, $file) = File::Spec->splitpath($File::Find::name);
    my @dirs = File::Spec->splitdir($dir);
    my @base = ($volume); # the base directory of the file
    for my $dir (@dirs) {
        last if $dir eq 'Lib';
        push @base, $dir;
    }
    # the part that is common among the various bases
    my @rest = @dirs[$#base .. $#dirs];
    my $base = File::Spec->catdir(@base);
    my $rest = File::Spec->catfile(@rest, $file);
    # if we don't have this file yet, or if the file is newer than the one
    # we have
    if (not exists $copy{$rest} or (stat $File::Find::name)[9] > $copy{$rest}{mtime}) {
        $copy{$rest} = {
            mtime => (stat _)[9],
            base  => $base
        };
    }
}, @sources;

print "copy\n";
for my $rest (sort keys %copy) {
    print "\t$rest from $copy{$rest}{base}\n";
}
If you can, standardize on a single location for your libraries, and then use one of the following: set the PERL5LIB environment variable, add
use lib 'C:\Lib';
to your script, or run
perl -I C:\Lib myscript
Any of these will give you a single copy of your lib directory that any of your scripts will be able to access.