Is it possible to use Perl's move function from the File::Copy module with a wildcard, so that I can move a number of files that share the same file extension?
So far, I can only get move to work if I explicitly name the files.
For example, I wanted to do something like so:
my $old_loc = "/share/cust/abc/*.dat";
my $arc_dir = "/share/archive_dir/";
Right now, I can do one file like so:
use strict;
use warnings;
use File::Copy;
my $old_loc = "/share/cust/abc/Mail_2011-10-17.dat";
my $arc_dir = "/share/archive_dir/Mail_2011-10-17.dat";
my $new_loc = $arc_dir;
#archive
print "Moving files to archive...\n";
move($old_loc, $new_loc) || die "Could not move $old_loc to $new_loc: $!\n";
What I want to do at the end of my Perl program is move all of these *.dat files to an archive directory.
You can use Perl's glob operator to get the list of files you need to move:
use strict;
use warnings;
use File::Copy;
my @old_files = glob "/share/cust/abc/*.dat";
my $arc_dir = "/share/archive_dir/";
foreach my $old_file (@old_files)
{
    my ($short_file_name) = $old_file =~ m~/([^/]+\.dat)$~;
    my $new_file = $arc_dir . $short_file_name;
    move($old_file, $new_file) or die "Could not move $old_file to $new_file: $!\n";
}
This has the benefit of not relying on a system call, which is unportable, system-dependent, and possibly dangerous.
EDIT: A better way to do this is just to supply the new directory instead of the full new filename. (Sorry for not thinking of this earlier!)
move($old_file, $arc_dir) or die "Could not move $old_file to $arc_dir: $!\n";
# Probably a good idea to make sure $arc_dir ends with a '/' character, just in case
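Alternatively, a small sketch using the core File::Basename and File::Spec modules sidesteps the trailing-slash concern when you do want to build the full destination path yourself:

use File::Basename qw(basename);
use File::Spec;

# Build the destination path portably instead of concatenating strings
my $new_file = File::Spec->catfile($arc_dir, basename($old_file));
move($old_file, $new_file) or die "Could not move $old_file to $new_file: $!\n";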
From the File::Copy documentation:
If the destination already exists and is a directory, and the source
is not a directory, then the source file will be renamed into the
directory specified by the destination.
use strict;
use warnings;
use File::Copy;
my $old_loc = "/share/cust/abc/*.dat";
my $arc_dir = "/share/archive_dir/";
for my $file (glob $old_loc) {
    move($file, $arc_dir) or die $!;
}
You might have better luck with the system function (although you must be careful with it).
print system("mv -v /share/cust/abc/*.dat /share/archive_dir/");
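If you do shell out, it's worth checking the exit status. Here is a minimal sketch using the list form of system and Perl's own glob, so the wildcard expansion doesn't depend on the shell:

my @dat_files = glob "/share/cust/abc/*.dat";
if (@dat_files) {
    my $status = system('mv', '-v', @dat_files, '/share/archive_dir/');
    die "mv failed with exit status ", $status >> 8, "\n" if $status != 0;
}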
I'm a newbie at Perl scripting. I have two files and want to compare their contents line by line, deleting the matching lines. If I use a wildcard in one file to match multiple lines in the second file, it should delete all of those matches and write the remaining lines to another file. I got the snippet below from another mail, but it does not take care of wildcards.
use strict;
use warnings;
$\="\n";
open my $FILE, "<", "file.txt" or die "Can't open file.txt: $!";
my %Set = map {$_ => undef} <$FILE>;
open my $FORBIDDEN, "<", "forbidden.txt" or die "Can't open forbidden.txt: $!";
my %Forbidden = map {$_ => undef} <$FORBIDDEN>;
open my $OUT, '>', 'output' or die $!;
my %Result = %Set; # make a copy
delete $Result{$_} for keys %Forbidden;
print $OUT keys %Result;
I'm not sure what you mean with "wild card".
Nevertheless, there are many ways to do what you want. Since it is nicer to use existing modules, you can use the List::Compare module from CPAN.
With the following code you use this module to find all the lines contained in one file (file.txt) but not in the other (forbidden.txt), so you implicitly match the lines that are equal. This code doesn't delete them from the file, it only finds them.
Your code would look like:
use strict;
use warnings;
use File::Slurp qw(read_file); #cpan-module
use List::Compare; #cpan-module
chomp( my @a_file = read_file 'file.txt' );
chomp( my @b_file = read_file 'forbidden.txt' );

# here it stores in an array all the lines contained in 'file.txt'
# but not in 'forbidden.txt'
my @a_file_only = List::Compare->new( \@a_file, \@b_file )->get_Lonly;

print "$_\n" for @a_file_only;
# here you could write these lines to a new file to store them.
# At this point I just print them out.
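If you wanted to store them instead, a minimal sketch (the output filename is just an example):

open my $out, '>', 'only_in_file.txt' or die "Can't open only_in_file.txt: $!";
print {$out} "$_\n" for @a_file_only;
close $out;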
The new approach, which treats each line of forbidden.txt as a pattern:

use Data::Dumper;

foreach my $filter (@b_file) {
    # keep only the lines that do not match this pattern
    @a_file = grep { !/$filter/ } @a_file;
}
print Dumper(@a_file);
It will reduce the lines in @a_file step by step, applying each filter in turn.
I'm trying to capture the output of a tail command to a temp file.
Here is a sample of my Apache access log.
Here is what I have tried so far.
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp ();
use File::Temp qw/ :seekable /;
chomp($tail = `tail access.log`);
my $tmp = File::Temp->new( UNLINK => 0, SUFFIX => '.dat' );
print $tmp "Some data\n";
print "Filename is $tmp\n";
I'm not sure how to go about passing the output of $tail to this temporary file.
Thanks
I would use a different approach for tailing the file. Have a look at File::Tail; I think it will simplify things.
It sounds like all you need is
print $tmp $tail;
But you also need to declare $tail and you probably shouldn't chomp it, so
my $tail = `tail access.log`;
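Putting it together with the File::Temp code from the question, a minimal sketch:

use strict;
use warnings;
use File::Temp ();

my $tail = `tail access.log`;   # capture the output, trailing newline included
my $tmp  = File::Temp->new( UNLINK => 0, SUFFIX => '.dat' );
print $tmp $tail;               # write the captured output to the temp file
print "Filename is ", $tmp->filename, "\n";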
A classic Perl approach is to open the tail command as a pipe and write its output to a plain file:
if (open LOGFILE, 'tail /some/log/file |' and open TAIL, '>/tmp/logtail')
{
    print TAIL $_ while <LOGFILE>;
    close TAIL and close LOGFILE;
}
There are many ways to do this, but since you are happy to use modules, you might as well use File::Tail.
use v5.12;
use warnings 'all';
use File::Tail;
my $lines_required = 10;
my $out_file = "output.txt";
open(my $out, '>', $out_file) or die "$out_file: $!\n";
my $tail = File::Tail->new("/some/log/file");
for (1 .. $lines_required) {
    print $out $tail->read;
}
close $out;
This sits and monitors the log file until it gets the 10 new lines. If you just want a copy of the last 10 lines as is, the easiest way is to use I/O redirection from the shell: tail /some/log/file > my_copy.txt
File "/root/actual" is not getting over written with content of "/root/temp" via perl script. If manually edited "/root/actual" is getting modified.
copy("/root/actual","/root/temp") or die "Copy failed: $!";
open(FILE, "</root/temp") || die "File not found";
my #lines = <FILE>;
close(FILE);
my @newlines;
foreach (@lines) {
    $_ =~ s/$aref1[0]/$profile_name/;
    push(@newlines, $_);
}
open(FILE, ">/root/actual") || die "File not found";
print FILE @newlines;
close(FILE);
File "/root/actual" is not getting over written with content of "/root/temp" via perl script. If manually edited "/root/actual" is getting modified.
Do you mean that /root/temp isn't being replaced by /root/actual? Or is /root/temp being modified as you wish, but it's not copying over /root/actual at the end of your program?
I suggest that you read up on modern Perl programming practices. You need to have use warnings; and use strict; in your program. In fact, many people on this forum won't bother answering Perl questions unless use strict; and use warnings; are used.
Where is $aref1[0] coming from? I don't see @aref1 declared anywhere in your program. Or, for that matter, $profile_name.
If you're just reading the entire file into memory and running a substitution over it, there's no reason to copy it to a temporary file first.
I rewrote what you had in a more modern syntax:
use strict;
use warnings;
use autodie;
use constant {
    FILE_NAME => 'test.txt',
};
my $profile_name = "bar"; #Taking a guess
my @aref1 = qw(foo ??? ??? ???); #Taking a guess
open my $input_fh, "<", FILE_NAME;
my #lines = <$input_fh>;
close $input_fh;
for my $line ( @lines ) {
    $line =~ s/$aref1[0]/$profile_name/;
}
open my $output_fh, ">", FILE_NAME;
print {$output_fh} @lines;
close $output_fh;
This works.
Notes:
use autodie; means you don't have to check whether the files opened successfully.
When I use a for loop, I can modify the array in place: the loop variable is an alias to each entry in the array.
No need for copy or a temporary file since you're replacing the original file anyway.
I didn't use it here since you didn't, but map { s/$aref1[0]/$profile_name/ } @lines; can replace that for loop. See map.
I have multiple files that have the extension .tdx.
Currently my program works on individual files using $ARGV[0]; however, the number of files is growing and I would like to use a wildcard based on the file extension.
After much research I am at a loss.
I would like to read each file individually so that the extract from each file can be identified by the user.
#!C:\Perl\bin\perl.exe
use warnings;
use FileHandle;
open my $F_IN, '<', $ARGV[0] or die "Unable to open file: $!\n";
open my $F_OUT, '>', 'output.txt' or die "Unable to open file: $!\n";
while (my $line = $F_IN->getline) {
    if ($line =~ /^User/) {
        $F_OUT->print($line);
    }
    if ($line =~ /--FTP/) {
        $F_OUT->print($line);
    }
    if ($line =~ /^ftp:/) {
        $F_OUT->print($line);
    }
}
close $F_IN;
close $F_OUT;
All the files are in one directory, so I assume I will need to open the directory.
I am just not sure whether I need to build an array of files or build a list and chomp it.
You have many options --
Loop over @ARGV, allowing the user to pass in a list of files
Use glob to pass in a pattern that Perl will expand into a list of files (and then loop over that list, as in #1). This can be messy, as the user has to quote the pattern so the shell doesn't expand it first.
Write some wrapper to call your existing script over and over again.
There's also a variant of the first one, which is to read from <>. This reads either STDIN or the files named in @ARGV, opening them automatically (see the sketch after this list). See eof for an example of how to use it.
As a variant of #2, you can pass in a directory name, and use either opendir and readdir to loop over the list (making sure to grab only files with your extension, or at the very least to ignore . and ..), or append /* or /*.tdx to it and use glob again.
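For instance, a minimal sketch of the <> variant, reusing the filter logic from the question (the script name extract.pl is just an example; invoke it as perl extract.pl *.tdx, or list the files explicitly):

use strict;
use warnings;

open my $F_OUT, '>', 'output.txt' or die "Unable to open file: $!\n";

# <> opens each file named on the command line in turn
while (my $line = <>) {
    if ($line =~ /^User/ or $line =~ /--FTP/ or $line =~ /^ftp:/) {
        print {$F_OUT} $line;
    }
}
close $F_OUT;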
The glob function can help you. Just try
my @files = glob '*.tdx';
for my $file (@files) {
    # Process $file...
}
In list context, glob expands its argument to the list of file names that match the pattern. For details, see glob in perlfunc.
I never got glob to work. What I ended up doing was building an array based on the file extension .tdx. From there I copied the array to a file list and read from that. What I ended up with is:
#!C:\Perl\bin\perl.exe
use warnings;
use FileHandle;
open my $F_OUT, '>', 'output.txt' or die "Unable to open file: $!\n";
open(FILELIST, "dir /b /s \"%USERPROFILE%\\Documents\\holding\\*.tdx\" |");
@filelist = <FILELIST>;
close(FILELIST);
foreach my $file (@filelist)
{
    chomp($file);
    open my $F_IN, '<', $file or die "Unable to open file: $!\n";
    while (my $line = $F_IN->getline)
    {
        # Doing Something
    }
    close $F_IN;
}
close $F_OUT;
Thank you for your answers; they helped in the learning experience.
If you're on a Windows machine, putting *.tdx on the command line might not work, nor may glob, which historically used the shell's globbing abilities. (It now appears that the built-in glob function uses File::Glob, so that may no longer be an issue.)
One thing you can do is not use globs, but allow the user to input the directories and suffixes they want. Then use opendir and readdir to go through the directories yourself.
use strict;
use warnings;
use feature qw(say);
use autodie;
use Getopt::Long; # Why not do it right?
use Pod::Usage; # It's about time to learn about POD documentation
my @suffixes;    # Hey, why not let people put in more than one suffix?
my @directories; # Let people put in the directories they want to check
my $help;
GetOptions (
    "suffix=s"    => \@suffixes,
    "directory=s" => \@directories,
    "help"        => \$help,
) or pod2usage ( -message => "Invalid usage" );
if ( not @suffixes ) {
    @suffixes = qw(tdx);
}
if ( not @directories ) {
    @directories = qw(.);
}
if ( $help ) {
    pod2usage;
}
my $regex = join "|", @suffixes;
$regex = qr/\.($regex)$/; # Will equal qr/\.(foo|bar|txt)$/ if the suffixes are foo, bar, and txt
for my $directory ( @directories ) {
    opendir my ($dir_fh), $directory; # autodie will catch any failure here
    while ( my $file = readdir $dir_fh ) {
        next unless -f "$directory/$file"; # readdir returns bare names, so prepend the directory
        next unless $file =~ /$regex/;
        # ... Here be dragons ...
    }
}
This will go through all of the directories your user entered and examine each entry. It uses the suffixes your user inputs (with .tdx being the default) to create a regular expression that is checked against each file name. If the file name matches the regular expression, do whatever you wanted to do with that file.
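For example, assuming the script is saved as extract.pl (the name is only an illustration), it could be invoked like this:

perl extract.pl
perl extract.pl --suffix tdx --suffix log --directory C:\data --directory C:\more

The first call uses the defaults (*.tdx files in the current directory); the second scans the two named directories for both *.tdx and *.log files.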
I have one folder containing 32 files and 3 directories. I want to add some lines of text at the top of each file. How can I do that?
Use File::Find to find the files. Use Tie::File and unshift to add lines to the top of the file.
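A minimal sketch of that suggestion; the folder name and the inserted lines are placeholders:

use strict;
use warnings;
use File::Find;
use Tie::File;

my @new_lines = ('First added line', 'Second added line');   # placeholder text

find(sub {
    return unless -f $_;                  # skip the directories themselves
    tie my @lines, 'Tie::File', $_ or die "Cannot tie $_: $!";
    unshift @lines, @new_lines;           # add the new lines at the top
    untie @lines;
}, 'some_folder');                        # placeholder directory

Note that File::Find also descends into the subdirectories, so files inside them get the new lines as well.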
TLP already gave you some hints on how to solve the problem, but there is always more than one way to do it. Instead of File::Find and Tie::File, I would use some more "modern" modules. In this full example I use Path::Class::Rule with an iterative interface, which I prefer to a recursive one.
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use open ':encoding(UTF-8)';
use open ':std';
use Path::Class;
use Path::Class::Rule;
my $rule = Path::Class::Rule->new->file;
my $iter = $rule->iter(dir('test'));
while ( my $file = $iter->() ) {
    print $file->stringify, "\n";
    add_line_to_file($file, "Sid was here.\n");
}
# 1: Path::Class::File Object
# 2: The Line
sub add_line_to_file {
    my ( $file, $line ) = @_;

    # Open the file for appending - returns an IO::File object
    my $fh = $file->open('>>') or die "Cannot open $file: $!\n";

    # Seek to the end
    $fh->seek(0, 2);

    # Add the line
    $fh->print($line);
    $fh->close;

    return;
}
This could work:
perl -0777 -pi -e 's/^/my text\n/' * */*
Please try this on a copy of your directory to make sure it does what you want.