I'm trying to write a script that will copy files from one folder to another based on (part of) the file name. I have a few thousand text files in a folder, but I only need to find a few hundred of them, and searching for them one by one takes a lot of time.
Copy seems like the right tool, combined with a for loop over the list of files I'm trying to find out of the thousands. But Copy needs an exact name, and the problem is I only have part of each file name.
Example of the list of files (content of the text file):
ABCDEF-A01
ADEWSD-B03
ABDDER-C23
Example of filenames:
GGI_1409506_ABCDEF-A01.txt, GGI_ADEWSD-B03.txt, DA_ABDDER-C23_12304.txt
I only have ABCDEF-A01 instead of the full filename.
Expected result:
To be able to search through the folder and copy the files that match the entries in the list (one text file) to another location.
Anything that you can share? Info/answers/related posts? Thank you so much!
Try the Perl code below. When running the program, pass the source directory path and the destination directory path as arguments, along with the list of filenames that need to be searched. If the destination directory doesn't exist, the program creates it automatically, as shown below:
Code:
use strict;
use warnings;
use File::Copy;
my $source = $ARGV[0];
my $destination = $ARGV[1];
my $listFiles = $ARGV[2];
if(-f $destination)
{
print "Already unknown extension of file exists with the same name of directory. So rename the file and run the program";
exit 0;
}
if(-d "$destination")
{
print "Directory where files need to be copied: $destination\n";
}
else
{
print "No Directory found and hence created the directory $destination\n";
mkdir("$destination");
}
opendir DIR, $source or die "Cannot open source directory $source - $!";
my @files = grep /(.*?)(\.txt)$/,(readdir DIR);
open my $fh, '<', "$listFiles" or die "Cannot open the file names to search $listFiles - $!";
open my $out,'>', "$ARGV[1]\\NoMatch.txt" or die "Cannot write to the file NoMatch.txt - $!";
my @listFileNames = <$fh>;
my @listFiles = ();
foreach my $InputFiles (@files)
{
chomp($InputFiles);
foreach my $list(@listFileNames)
{
chomp($list);
if($InputFiles =~ /\Q$list\E/i)   # quote the pattern in case the partial name contains regex metacharacters
{
print "Files : $InputFiles copying\t";
copy("$source\\$InputFiles", $destination);   # prepend the source path; readdir returns bare filenames
print "Files : $InputFiles copied\n";
push(@listFiles,$list);
}
}
}
my %seen = ();
my $count = 0;
foreach my $list (@listFiles)
{
$seen{lc($list)} = 1;
#print $list . "\n";
}
foreach my $listnames (@listFileNames)
{
if($seen{lc($listnames)})
{
}
else
{
if($count ==0)
{
print "\nFilenames that did not match the text files are present in the destination folder : NoMatch.txt file " . "\n";
}
print $out "$listnames\n";
$count++;
}
}
close($out);
close($fh);
closedir(DIR);
Create a batch file and put it in the source folder, along with your list of files to copy (list.txt). Wrap the partial name in wildcards so robocopy matches the full file names:
for /f %%f in (list.txt) do robocopy c:\source d:\dest *%%f*
Hope this helps
#!/usr/bin/perl -w
use strict;
use File::Copy;
my $source_directory = q{};   # fill in the source directory path
my $new_directory = q{};      # fill in the destination directory path
opendir(my $dh, $source_directory) || die "Cannot open $source_directory: $!";
while(readdir $dh) {
if($_ =~ /[A-Z]+-[A-Z]\d+/){   # matches names such as ABCDEF-A01
move("$source_directory/$_", "$new_directory/$_");   # use copy() from File::Copy instead if the originals should stay
}
}
closedir $dh;
Related
I tried
system("ls > file");
in my Perl script, but when I open my file it is empty, while my directory has a list of files.
my $dirpath = "./";
my $filepath = "./file";
opendir(DIR, $dirpath) or die("Cannot open directory: $!");
open(OUT, ">$filepath");
foreach( sort readdir(DIR) ){
next if $_ =~ /^\.{1,2}$/; # to ignore "." and ".."
print(OUT "$_\n");
}
close(OUT);
closedir(DIR);
Note that if the output file is in the directory you are listing, it will be listed as well, since you have to open it before reading the directory.
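If that bothers you, you can skip the output file inside the loop. A minimal sketch, assuming the same ./file output path as above:
my $dirpath = "./";
my $filepath = "./file";
opendir(DIR, $dirpath) or die("Cannot open directory: $!");
open(OUT, ">$filepath") or die("Cannot open file: $!");
foreach( sort readdir(DIR) ){
next if $_ =~ /^\.{1,2}$/; # ignore "." and ".."
next if $_ eq "file";      # skip the output file itself
print(OUT "$_\n");
}
close(OUT);
closedir(DIR);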
One weird behaviour I am observing: in a Perl script, I check whether a directory exists. If it exists, a file is copied to that directory; if it doesn't, the directory is created first and then the file is copied.
When I go and check the file manually, the file is present. But when I run the same script again to copy another file using the same process as above, I see that the previous files aren't present. For confirmation, I performed a directory read in the script, and it said the directory is empty.
Can anyone please help me understand this?
Please find the code below:
if (-d "/home/foo") {
print "the directory is already created \n";
$i=0;
opendir(DIR, "/home/foo") or die "Cant open /home/foo: $!\n";
@list = readdir(DIR);
foreach $line(@list) {
unless ($line =~ /^[.][.]?\z/) {
$i++;
}
}
if ($i != 0) { print "There is Stuff in here!"; }
else { print "This Dir is Empty!"; }
closedir(DIR);
}
else {
&runcond("mkdir /home/foo");
}
`cp $file /home/foo`; #Copying a file $file in the directory
`cp $file /home/foo`;
You haven't defined $file
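For illustration, a minimal corrected sketch (the file path here is only a placeholder; File::Copy's copy() is used instead of shelling out to cp):
use strict;
use warnings;
use File::Copy;
my $file = "/path/to/source_file"; # placeholder: set this to the file you want to copy
my $dir  = "/home/foo";
mkdir $dir unless -d $dir;         # create the directory only if it is missing
copy($file, $dir) or die "Copy of $file to $dir failed: $!";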
I have a text file with a list of individual mnemonics (1000+) in it, and a directory that also has page files in it. I want to see how many pages a given mnemonic appears on.
Below is my code so far:
use strict;
use warnings;
use File::Find ();
my $mnemonics = "path\\path\\mnemonics.txt";
my $pages = "path\\path\\pages\\";
open (INPUT_FILE, $mnemonics) or die "Cannot open file $mnemonics\n";
my @mnemonic_list = <INPUT_FILE>;
close (INPUT_FILE);
opendir (DH, $pages);
my @pages_dir = readdir DH;
foreach my $mnemonic (@mnemonic_list) {
foreach my $page (@pages_dir) {
if (-e $mnemonic) {
print "$mnemonic is in the following page: $page";
} else {
print "File does not exist \n";
}
}
}
Basically, where I know that a name exists in a page, it isn't showing me the correct output. I'm getting a lot of "File does not exist" when I know the file does exist.
Also, instead of (-e) I tried using:
if ($name =~ $page)
and that didn't work either..
Please help!
Assuming that you want to search a directory full of text files and print the names of files that contain words from the words in mnemonics.txt, try this:
use strict; use warnings;
my $mnemonics = "path/mnemonics.txt";
my $pages = "path/pages/";
open (INPUT_FILE, $mnemonics) or die "Cannot open file $mnemonics\n";
chomp(my @mnemonic_list = <INPUT_FILE>);
close (INPUT_FILE);
local($/, *FILE); # set "slurp" mode
for my $filename (<$pages*>) {
next if -d "$filename"; # ignore subdirectories
open FILE, "$filename";
binmode(FILE);
$filename =~ s/.+\///; # remove path from filename for output
my $contents = <FILE>; # "slurp" file contents
for my $mnemonic (@mnemonic_list) {
if ($contents =~ /$mnemonic/i) {
print "'$mnemonic' found in file $filename\n";
}
}
close FILE;
}
So I have a program that I want to use to clean some text files. The program asks the user to enter the full path of a directory containing these text files. From there I want to read the files in the directory, print them to a new file (that is specified by the user), and then clean them in the way I need. I have already written the script to clean the text files.
I ask the user for the directory to use:
chomp ($user_supplied_directory = <STDIN>);
opendir (DIR, $user_supplied_directory);
Then I need to read the directory.
my @dir = readdir DIR;
foreach (@dir) {
Now I am lost.
Any help please?
I'm not certain what you want, so I made some assumptions:
When you say clean the text file, you meant delete the text file
The names of the files you want to write into are formed by a pattern.
So, if I'm right, try something like this:
chomp ($user_supplied_directory = <STDIN>);
opendir (DIR, $user_supplied_directory);
my @dir = readdir DIR;
foreach (@dir) {
next if (($_ eq '.') || ($_ eq '..'));
my $old_filename = "$user_supplied_directory/$_";   # prepend the directory so we touch the right file
next if -d $old_filename;                           # skip subdirectories
# Reads the content of the original file
open FILE, $old_filename;
my $contents = do { local $/; <FILE> };             # slurp the whole file, not just the first line
close FILE;
# Here you supply the new filename
my $new_filename = $old_filename . ".new";
# Writes the content to the new file
open FILE, '>'.$new_filename;
print FILE $contents;
close FILE;
# Deletes the old file
unlink $old_filename;
}
I would suggest that you switch to File::Find. It can be a bit of a challenge in the beginning but it is powerful and cross-platform.
But, to answer your question, try something like:
my @files = readdir DIR;
foreach $file (@files) {
foo("$user_supplied_directory/$file");
}
where "foo" is whatever you need to do to the files. A few notes might help:
using "#dir" as the array of files was a bit misleading
the folder name needs to be prepended to the file name to get the right file
it might be convenient to use grep to throw out unwanted files and subfolders, especially ".."
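For instance, a minimal sketch tying those notes together (assuming foo() is whatever per-file routine you already have; the one below is just a placeholder):
use strict;
use warnings;
my $user_supplied_directory = shift // '.';   # directory from the command line, defaulting to the current one
sub foo { print "processing $_[0]\n" }        # placeholder for your real per-file work
opendir(DIR, $user_supplied_directory) or die("Cannot open directory: $!");
# grep throws out "." and ".." and anything else that is not a plain file
my @files = grep { -f "$user_supplied_directory/$_" } readdir DIR;
closedir(DIR);
foreach my $file (@files) {
    # prepend the folder name so foo() sees the right path
    foo("$user_supplied_directory/$file");
}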
I wrote something today that used readdir. Maybe you can learn something from it. This is just a part of a (somewhat) larger program:
our @Perls = ();
{
my $perl_rx = qr { ^ perl [\d.] + $ }x;
for my $dir (split(/:/, $ENV{PATH})) {
### scanning: $dir
my $relative = ($dir =~ m{^/});
my $dirpath = $relative ? $dir : "$cwd/$dir";
unless (chdir($dirpath)) {
warn "can't cd to $dirpath: $!\n";
next;
}
opendir(my $dot, ".") || next;
while ($_ = readdir($dot)) {
next unless /$perl_rx/o;
### considering: $_
next unless -f;
next unless -x _;
### saving: $_
push @Perls, "$dir/$_";
}
}
}
{
my $two_dots = qr{ [.] .* [.] }x;
if (grep /$two_dots/, @Perls) {
@Perls = grep /$two_dots/, @Perls;
}
}
{
my (%seen, $dev, $ino);
@Perls = grep {
($dev, $ino) = stat $_;
! $seen{$dev, $ino}++;
} @Perls;
}
The crux is push(@Perls, "$dir/$_"): filenames read by readdir are basenames only; they are not full pathnames.
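A tiny illustration of that point (just a sketch; /tmp is only an example directory):
use strict;
use warnings;
my $dir = "/tmp";   # example directory, use any path you like
opendir my $dh, $dir or die "Cannot open $dir: $!";
while ( my $entry = readdir $dh ) {
    next if $entry =~ /^\.\.?$/;
    # $entry is only a basename; file tests such as -f or -x need the full path
    print "basename: $entry => full path: $dir/$entry\n";
}
closedir $dh;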
You can do the following, which allows the user to supply their own directory or, if no directory is specified by the user, it defaults to a designated location.
The example shows the use of opendir and readdir, stores all files in the directory in the @files array, and stores only files that end with '.txt' in the @keys array. The while loop ensures that the full path to each file is stored in the arrays.
This assumes that your "text files" end with the ".txt" suffix. I hope that helps, as I'm not quite sure what's meant by "cleaning the files".
use feature ':5.24';
use File::Copy;
my $dir = shift || "/some/default/directory";
opendir(my $dh, $dir) || die "Can't open $dir: $!";
while ( readdir $dh ) {
push( @files, "$dir/$_");
}
# store ".txt" files in new array
foreach $file ( @files ) {
push( @keys, $file ) if $file =~ /\S+\.txt\z/;
}
# Move files to new location, even if it's across different devices
for ( @keys ) {
move($_, "/some/other/directory/") or die "Couldn't move files: $!\n";
}
See the perldoc of File::Copy for more info.
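If you would rather leave the originals in place, File::Copy's copy() takes the same arguments and also accepts a directory as the destination (a sketch reusing the @keys array from the example above):
use File::Copy;
for ( @keys ) {
    copy( $_, "/some/other/directory/" ) or die "Couldn't copy $_: $!\n";
}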
I am trying to prefix a string (reference_) to the names of all the *.bmp files in all the directories as well as sub-directories. The first time we run the Silk script, it creates directories and subdirectories, and under each subdirectory it stores each mobile application's screenshot with a .bmp extension.
When I run the automated Silk script a second time, it will again create the *.bmp files in all the subdirectories. Before running the script the second time, I want to prefix all the *.bmp files with the string reference_.
For example, first_screen.bmp becomes reference_first_screen.bmp.
I have the directory structure as below:
C:\Image_Repository\BG_Images\second
...
C:\Image_Repository\BG_Images\sixth
each having first_screen.bmp files, etc.
Could anyone help me out?
How can I prefix all the image file names with the reference_ string?
When I run the script the second time, the Perl script in Silk will take both images from the sub-directory and compare them pixel by pixel. I am trying with the code below.
Could you please guide me on how to complete this task?
#!/usr/bin/perl -w
&one;
&two;
sub one {
use Cwd;
my $dir ="C:\\Image_Repository";
#print "$dir\n";
opendir(DIR,"+<$dir") or "die $!\n";
my @dir = readdir DIR;
#$lines=@dir;
delete $dir[-1];
print "$lines\n";
foreach my $item (@dir)
{
print "$item\n";
}
closedir DIR;
}
sub two {
use Cwd;
my $dir1 ="C:\\Image_Repository\\BG_Images";
#print "$dir1\n";
opendir(D,"+<$dir1") or "die $!\n";
my @dire = readdir D;
#$lines=@dire;
delete $dire[-1];
#print "$lines\n";
foreach my $item (@dire)
{
#print "$item\n";
$dir2="C:\\Image_Repository\\BG_Images\\$item";
print $dir2;
opendir(D1,"+<$dir2") or die " $!\n";
my @files=readdir D1;
#print "@files\n";
foreach $one (@files)
{
$one="reference_".$one;
print "$one\n";
#rename $one,Reference_.$one;
}
}
closedir DIR;
}
I tried the open call with '+<' mode, but I am getting a compilation error for the read-write mode.
When I run this code, it shows the files in the BG_Images folder with the prefixed string, but it doesn't actually update the files in the sub-directories.
You don't open a directory for writing. Just use opendir without the mode parts of the string:
opendir my($dir), $dirname or die "Could not open $dirname: $!";
However, you don't need that. You can use File::Find to make the list of files you need.
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
use File::Find;
use File::Find::Closures qw(find_regular_files);
use File::Spec::Functions qw(catfile);
my( $wanted, $reporter ) = find_regular_files;
find( $wanted, $ARGV[0] );
my $prefix = 'reference_';
foreach my $file ( $reporter->() )
{
my $basename = basename( $file );
if( index( $basename, $prefix ) == 0 )
{
print STDERR "$file already has '$prefix'! Skipping.\n";
next;
}
my $new_path = catfile(
dirname( $file ),
"recursive_$basename"
);
unless( rename $file, $new_path )
{
print STDERR "Could not rename $file: $!\n";
next;
}
print $file, "\n";
}
You should probably check out the File::Find module for this - it will make recursing up and down the directory tree simpler.
You should probably be scanning the file names and modifying those that don't start with reference_ so that they do. That may require splitting the file name up into a directory name and a file name and then prefixing the file name part with reference_. That's done with the File::Basename module.
At some point, you need to decide what happens when you run the script the third time. Do the files that already start with reference_ get overwritten, or do the unprefixed files get overwritten, or what?
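A rough sketch of that approach, assuming the screenshots live under C:\Image_Repository\BG_Images and that files already carrying the prefix should be left alone:
use strict;
use warnings;
use File::Find;
use File::Basename;
my $top    = 'C:\Image_Repository\BG_Images';   # assumed top-level directory
my $prefix = 'reference_';
find(sub {
    return unless -f $_ && /\.bmp$/i;        # only plain .bmp files
    my $base = basename($_);
    return if index($base, $prefix) == 0;    # already prefixed, leave it alone
    # File::Find chdirs into each directory, so a bare rename works here
    rename $_, $prefix . $base
        or warn "Could not rename $File::Find::name: $!\n";
}, $top);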
The reason the files are not being renamed is that the rename operation is commented out. Remember to add use strict; at the top of your script (as well as the -w option which you did use).
If you get a list of files in an array @files (and the names are base names, so you don't have to fiddle with File::Basename), then the loop might look like:
foreach my $one (@files)
{
my $new = "reference_$one";
print "$one --> $new\n";
rename $one, $new or die "failed to rename $one to $new ($!)";
}
With the aid of the find utility from GNU findutils for Windows:
$ find -iname "*.bmp" | perl -wlne"chomp; ($prefix, $basename) = split(m~\/([^/]+)$~, $_); rename($_, join(q(/), ($prefix, q(reference_).$basename))) or warn qq(failed to rename '$_': $!)"