I need to copy files with specific extensions, but there are multiple files with the same name. I don't want to overwrite them; I want to store the duplicates as file_1, file_2, etc. When I tried the Unix command line, the files were overwritten even though I used cp -n.
Is there any way I can do this task using the command line or Perl?
the command i used was
find -L . -name "*.txt" -exec cp -n {} -t ~/destination \;
You can also use the cp --backup=numbered option, which keeps the existing file and stores numbered backup copies (file.~1~, file.~2~, ...) instead of overwriting.
In Perl (untested)
perl -MFile::Copy=cp -e '-e ($n = "$ENV{HOME}/destination/$_") or cp $_, $n for @ARGV' *.txt
The Perl script below recursively finds the files and copies them into the destination folder; if a file already exists there, it renames the copy to filename_1, filename_2, and so on.
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Basename;
use File::Spec::Functions qw'catfile';
use File::Copy qw'move';

my $origin      = '/home/itadmin/FoldersFind/OriginalFolder';
my $destination = '/home/itadmin/FoldersFind/Destination';
mkdir($destination, 0777) unless -d $destination;

find(\&wanted, $origin);

sub wanted
{
    # Only act on plain files with the extensions we are interested in
    return unless -f $_ && /\.(txt|html|xml)$/;
    my $filename = basename($File::Find::name);
    my $path     = catfile($destination, $filename);
    # If the name is already taken, insert _1, _2, ... before the extension
    my $cnt = 0;
    while (-e $path)
    {
        $cnt++;
        my ($name, undef, $ext) = fileparse($filename, qr/\.[^.]*$/);
        $path = catfile($destination, "${name}_$cnt$ext");
    }
    # Inside wanted() the cwd is the file's own directory, so $_ is valid here
    move($_, $path) or warn "Cannot move '$File::Find::name': $!";
}
Input (fetched files may have duplicate names):
/OriginalFolder/<folders1>/*file
/OriginalFolder/<folders2>/*file
Output (renamed):
/Destination/*file_<count> #/Destination/*file_1
/Destination/*file_<count> #/Destination/*file_2
I'm trying the following code to search for .txt files and rename them while copying them to a different directory.
#!/bin/perl
use File::Basename;
@txtfiles = <*/*.txt>;
foreach my $file(@textfiles){
$dir = dirname($file);
$file = basename($file);
$file =~ s/(\d+)/$dir/; //renaming number with $dir
`mkdir -p summary` unless -d summary;
`cp $file summary`;
}
The above code gives an error saying there is no such file to copy, although a print statement at each line shows the renamed file names correctly.
NOOOOO!
Don't use system commands -- especially since there are Perl commands that you can use.
`mkdir -p summary` unless -d summary;
`cp $file summary`;
Use the Perl commands!
use File::Copy; # Standard Perl Module. USE IT!
...
mkdir 'summary' unless -d 'summary'; # No need for `-p`
copy $file, 'summary';
Here's a revised script:
#! /usr/bin/env perl
use strict;
use warnings;
use feature qw(say);
# use File::Basename; -- No need for this in this version
use File::Copy;
# What files do you want to find?
# Do you want to find the immediate files or the
# files in the subdirectories?
my @text_files = glob('*.txt'); # Immediate files only
mkdir 'summary' if not -d 'summary'; # Make this dir before the loop
for my $file ( @text_files ) {
    next unless -f $file; # Make sure this is a file you want to copy!
    #
    # No need for 'dirname' and 'basename' with '*.txt' glob
    #
    if ( not copy $file, 'summary' ) { # Check the outcome of this command.
        warn qq(Could not copy file "$file" to "summary".);
    }
}
Let us know if you need to copy files in subdirectories rather than just in the immediate directory. Maybe you can use make_path from File::Path, or the Perl version of find from File::Find. These are standard Perl modules that all Perl installations have.
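If you do need to descend into subdirectories, the same copy can be built from the standard File::Find and File::Copy modules. This is only a sketch: copy_txt_tree is an illustrative name, and it does not add the _1, _2 suffixes (combine it with a collision loop like the one in the earlier answer if you need that).

```perl
#!/usr/bin/env perl
# Sketch: recursively copy every *.txt file under $origin into one
# flat $destination directory.
use strict;
use warnings;
use Cwd qw(abs_path);
use File::Find;
use File::Copy qw(copy);
use File::Spec::Functions qw(catfile);

sub copy_txt_tree {
    my ($origin, $destination) = @_;
    mkdir $destination unless -d $destination;
    my $dest_abs = abs_path($destination);   # find() changes directories
    find(sub {
        return unless -f $_ && /\.txt$/;     # plain files ending in .txt
        # Inside wanted(), the cwd is the file's own directory,
        # so $_ (the bare file name) is a valid relative path.
        copy($_, catfile($dest_abs, $_))
            or warn "Could not copy '$File::Find::name': $!";
    }, $origin);
}
```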
Addendum
I don't want the current directory. The files reside inside subdirectories, i.e. foo/text_v0001.txt, foo/text_v0002.txt, foo_au/text_v0003.txt, foo_au/text_v0004.txt, and so on. I want to replace the numbers with the directory name, e.g. foo/text_v0001.txt should be renamed to text_foo.txt, and foo/text_v0002.txt should also be renamed to text_foo.txt (since we can't have two files with the same name in one folder, we can append a part number to the second file, i.e. text_fooPart2.txt).
That last part is a doozy, and it's a new requirement too. I need to verify that a file doesn't already exist with the same name, and if it does, I need to make sure that I find the next available name.
If I find that a file already exists, I'll loop, incrementing a duplicate-file counter, until I find a file name that doesn't already exist.
#! /usr/bin/env perl
use strict;
use warnings;
use feature qw(say);
use File::Basename;
use File::Copy;
use File::Glob;    # Improved glob matching.

use constant {
    DIRECTORY => 'summary',
};

# What files do you want to find?
# Do you want to find the immediate files or the
# files in the subdirectories?
#
# Let's do a more sophisticated pattern making sure we're catching
# the files we want.
#
my @text_files = glob('*/*.txt');    # Subdirectories only
mkdir DIRECTORY if not -d DIRECTORY; # Make this dir before the loop

for my $file ( @text_files ) {
    my $dir_name  = dirname $file;
    my $file_name = basename $file;
    say "DEBUG: On '$file'.";
    #
    # Let's make sure that the file name matches the expected
    # pattern. If the substitution doesn't succeed, we assume
    # this file shouldn't be copied, and skip it.
    #
    # I'm searching for a file that has the suffix '_vxxxx.txt' where
    # 'xxxx' is some number. I remove the number and the letter 'v',
    # and add in the directory name.
    #
    if ( not $file_name =~ s/_v(\d+)\.txt$/_$dir_name.txt/ ) {
        warn qq("$dir_name/$file_name" has not been copied.);
        next;
    }
    #
    # If the name matches, make sure it's a file
    #
    if ( not -f $file ) {
        warn qq("$file" is not a file and wasn't copied.);
        next;
    }
    #
    # Now make sure the file name is unique
    #
    if ( -f DIRECTORY . "/$file_name" ) {    # File name already exists
        say qq(DEBUG: Duplicate file '$file_name' detected!);
        my $dup_file_counter = 2;
        ( my $file_no_suffix = $file_name ) =~ s/\.txt$//;
        #
        # Find a non-matching name
        #
        for (;;) {
            my $new_file_name = $file_no_suffix . "_part_$dup_file_counter.txt";
            say "DEBUG: New file name '$new_file_name'";
            say qq(DEBUG: if ( not -e @{[DIRECTORY]} . "/$new_file_name" ) { );
            if ( not -e DIRECTORY . "/$new_file_name" ) {
                $file_name = $new_file_name;
                last;
            }
            else {
                $dup_file_counter += 1;
            }
        }
    }
    if ( not copy $file, DIRECTORY . "/$file_name" ) { # Check the outcome of this command.
        warn qq(Could not copy file "$file" to directory ") . DIRECTORY . qq(".);
    }
}
In the loop you are using @textfiles instead of @txtfiles. Use strict.
#!/usr/local/bin/perl
use strict;
use warnings;
use File::Basename;

my @txtfiles = glob("*.txt");
foreach my $file (@txtfiles) {
    my $dir = dirname($file);
    $file = basename($file);
    $file =~ s/(\d+)/$dir/;    # renaming number with $dir
    `mkdir -p summary` unless -d "summary";
    `cp $file summary`;
}
I am trying to access all sub-folders in the current working directory. And then I want to run a program in each sub-folder. How can I do this? My code gave the following error:
Too many arguments for glob at ./analysis.pl line 13, near ""08")"
BEGIN not safe after errors--compilation aborted at ./analysis.pl line 13.
#!/usr/bin/perl
use File::chdir;
use strict;
use warnings;
use Cwd;

# current working directory
my $dir = cwd();

# subfolders pathway
my @dirs = glob ($dir/*);

# input file for program
my $data = "ethanol.txt";

# enter each subfolder
foreach $dir ( @dirs ) {
    chdir($dir) or die "Cannot cd to $dir: $!\n";
    # Run for ethanol
    system("echo 1 1 | program -o $data");
    chdir("..");
}
I think what you actually meant to say was, glob("$dir/*"), but I like using File::Find::Rule for this type of task:
#!/usr/bin/env perl
use strict;
use warnings;
use Cwd;
use File::Find::Rule;

my $dir = cwd();

my @subdirs = File::Find::Rule->directory
                              ->in( $dir );

foreach my $subdir ( @subdirs ) {
    # do stuff
}
I notice that you have loaded File::chdir but don't use it in your program. It can be a very useful module, and is particularly applicable in this situation. It works by defining a magic variable $CWD that evaluates to the current directory when read, and alters the current directory when written to. That means it can replace Cwd and chdir.
In addition, you can use local to localise changes to the working directory so that the original location is restored at the end of a block.
Take a look at this rewrite of your own code that should do what you need.
use strict;
use warnings;
use File::chdir;

my $data = 'ethanol.txt';

while (my $node = glob '*') {
    next unless -d $node;
    local $CWD = $node;
    system "echo 1 1 | program -o $data";
}
I'm new to Perl and I'm trying to write a script that gets a path to a tar file, checks whether it contains a .txt file, and if so extracts that file to an output folder. Here is my code:
#!/usr/bin/perl
use strict;
use warnings;
use Archive::Tar;
use File::Spec::Functions qw(catdir);

# check whether the tar file contains a txt file; if yes, extract it to a folder
my $tarPath = 'path/to/tarArchive';
my $tar = Archive::Tar->new($tarPath);
my @files = $tar->list_files;
my $output_dir = 'C:/output/path';

foreach my $file (@files) {
    if ( $file =~ /txt$/ ) {
        my $extracted_file = catdir( $fileName, $file );
        my $extracted_path = catdir( $output_dir, $file );
        $tar->extract( $extracted_file, $output_dir );
        print $extracted_file;
    }
}
exit 0;
When I run this script I get an error: no such file in archive.
Could you please help me understand what I am doing wrong and what the correct command should be?
The Archive::Tar extract method does not allow you to specify an output directory; it only allows a list of input files. You need to use the extract_file method:
$tar->extract_file($extracted_file, $output_dir);
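A fuller sketch of the fix, with placeholder paths and an illustrative function name (extract_txt): note that according to the Archive::Tar documentation, extract_file's optional second argument is the full destination path including the file name, not just a directory, so the output directory is joined with the entry's base name first.

```perl
#!/usr/bin/env perl
# Sketch: extract every .txt entry from a tar archive into $output_dir.
use strict;
use warnings;
use Archive::Tar;
use File::Basename qw(basename);
use File::Spec::Functions qw(catfile);

sub extract_txt {
    my ($tar_path, $output_dir) = @_;
    my $tar = Archive::Tar->new($tar_path)
        or die "Cannot read '$tar_path': " . Archive::Tar->error;
    for my $file ($tar->list_files) {
        next unless $file =~ /\.txt$/;
        # extract_file's second argument is the full target path,
        # including the file name, not just a directory.
        my $dest = catfile($output_dir, basename($file));
        $tar->extract_file($file, $dest)
            or warn "Could not extract '$file': " . $tar->error;
    }
}
```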
In my Perl script I have the parameter $FILE = '/etc/sysconfig/network'.
In which way (in Perl) can I cut out only the directory part and put it into $DIR,
in order to get:
$DIR = '/etc/sysconfig'
(like dirname /etc/sysconfig/network in a shell script)?
Watch out! dirname() is deliberately dumb to emulate the dirname shell command. It is not so much "give me the directory part of this file path" as "give me all but the last part of this path". Why is that important?
my $dir = "/foo/bar/"; # obviously a directory
print dirname($dir); # prints /foo
This is fine, just so long as you realize that dirname does not return the dirname.
If you want the above to return /foo/bar/ you're better off using File::Spec.
use File::Spec;
my($vol,$dir,$file) = File::Spec->splitpath($path);
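A short demonstration of the difference described above, using the same example path:

```perl
#!/usr/bin/env perl
# dirname() strips the last path component even when the path is
# clearly a directory; File::Spec->splitpath keeps the directory part.
use strict;
use warnings;
use File::Basename qw(dirname);
use File::Spec;

my $path = '/foo/bar/';
print dirname($path), "\n";    # prints /foo

my ($vol, $dir, $file) = File::Spec->splitpath($path);
print $dir, "\n";              # prints /foo/bar/
```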
use File::Basename;

($name, $path, $suffix) = fileparse($fullname, @suffixlist);
$name     = fileparse($fullname, @suffixlist);
$basename = basename($fullname, @suffixlist);
$dirname  = dirname($fullname);
Read more about File::Basename in perldoc.
Use the File::Basename core module:
use strict;
use warnings;
use File::Basename;
my $FILE = '/etc/sysconfig/network';
my $DIR = dirname($FILE);
print $DIR, "\n";
This prints out:
/etc/sysconfig
I have a folder called Client which contains many subfolders. I want to create a Perl script to look at each of those subfolders and check for a folder there. If it is there, I want to skip it and move on, if it is not there, I want to create it and do some processing.
How do I go about looping through all of the subfolders and checking for the directory I want? I have found a lot of information on how to get all the files in a folder and/or subfolders, but nothing on checking for a directory within each subfolder.
Augh! Too much complexity in the other answers. The original question doesn't appear to be asking for a recursive traversal. As far as I can see, this is a perfectly sensible solution, and vastly more readable to boot:
foreach my $dir (glob "Client/*") {
    next if ! -d $dir;              # skip if it's not a directory
    next if -d "$dir/subfolder";    # skip if subfolder already exists
    mkdir "$dir/subfolder" or die;  # create it
    do_some_processing();           # do some processing
}
Seriously folks: opendir/readdir? Really?
It's pretty easy once you break it into steps. Get a list of the subdirectories with glob then see which ones don't have the second-level directory. If you are using a File::Find-like module, you are probably doing too much work:
#!perl
use strict;
use warnings;
use File::Spec::Functions;

my $start  = 'Clients';
my $subdir = 'already_there';

# @queue is the list of directories you need to process
my @queue = grep { ! -d catfile( $_, $subdir ) } # filter for the second level
            grep { -d }                          # filter for directories
            glob catfile( $start, '*' );         # everything below $start
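The queue can then be processed directly. A possible continuation, wrapped in a function for reuse; prepare_clients and process_client are illustrative names, not from any module:

```perl
#!/usr/bin/env perl
# Sketch: build the queue as above, then create the missing second-level
# directory in each entry before doing the real work.
use strict;
use warnings;
use File::Spec::Functions qw(catfile);

sub prepare_clients {
    my ($start, $subdir) = @_;
    # Same two-stage grep/glob pipeline as in the answer above
    my @queue = grep { ! -d catfile( $_, $subdir ) }
                grep { -d }
                glob catfile( $start, '*' );
    for my $dir (@queue) {
        my $new = catfile( $dir, $subdir );
        mkdir $new or die "Cannot create '$new': $!";
        # process_client($dir);    # placeholder for the real processing
    }
    return @queue;
}
```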
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;
use IO::Handle;

my $startdir = shift(@ARGV) || '.';
die "$startdir is not a directory\n"
    unless -d $startdir;

my $verify_dir_name = 'MyDir';

my $dh = IO::Handle->new;
opendir $dh, $startdir
    or die "Cannot open $startdir: $!\n";

while (defined(my $cont = readdir($dh))) {
    next if $cont eq '.' || $cont eq '..';
    my $fullpath = File::Spec->catfile($startdir, $cont);
    next unless -d $fullpath && -r $fullpath && -w $fullpath;
    my $verify_path = File::Spec->catfile($fullpath, $verify_dir_name);
    next if -d $verify_path;
    mkdir($verify_path, 0755);
    # do whatever other operations you want to $verify_path
}
closedir($dh);
The short answer is: use File::Find.
The long answer is: first write a subroutine that checks for the existence of the folder and, if it is not there, creates it and then does the processing needed. Then invoke the find function of the File::Find module with a reference to that subroutine and the starting folder, to process all the subfolders.
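Here is a sketch of what that might look like; check_and_process and process_folder are illustrative names, not part of File::Find. Note the guard that stops find from descending into the folders the script itself creates, which would otherwise recurse indefinitely.

```perl
#!/usr/bin/env perl
# Sketch: walk the tree with File::Find; in every subfolder create the
# wanted directory if it is missing, then do the processing.
use strict;
use warnings;
use File::Basename qw(basename);
use File::Find;
use File::Spec::Functions qw(catfile);

sub check_and_process {
    my ($start, $wanted_dir) = @_;
    find({
        no_chdir => 1,   # $_ is then the full path from $start
        wanted   => sub {
            return unless -d $_;
            return if $_ eq $start;        # skip the top folder itself
            if ( basename($_) eq $wanted_dir ) {
                $File::Find::prune = 1;    # don't descend into a created folder
                return;
            }
            my $target = catfile($_, $wanted_dir);
            return if -d $target;          # already there: skip it
            mkdir $target or warn "Cannot create '$target': $!";
            # process_folder($_);          # placeholder for the processing
        },
    }, $start);
}
```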