I have one folder. There are 32 files and 3 directories in that folder. I want to add some lines of text at the top of each file. How can I do that?
Use File::Find to find the files. Use Tie::File and unshift to add lines to the top of the file.
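A minimal sketch of that combination (the header lines and the start directory are made up; Tie::File rewrites files in place, so try it on a copy first):
use strict;
use warnings;
use File::Find;
use Tie::File;

# Lines to prepend; text and directory name are made up
my @header = ('First new line', 'Second new line');

find(sub {
    return unless -f;                  # skip the directories
    tie my @lines, 'Tie::File', $_
        or die "Cannot tie $_: $!";
    unshift @lines, @header;           # add the new lines at the top
    untie @lines;
}, 'some/folder');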
TLP has already given you some hints on how to solve the problem, but there is always more than one way to do it. Instead of File::Find and Tie::File I would use some more "modern" modules. In this full example I use Path::Class::Rule, with an iterative interface rather than a recursive one, which I like more.
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use open ':encoding(UTF-8)';
use open ':std';
use Path::Class;
use Path::Class::Rule;

my $rule = Path::Class::Rule->new->file;
my $iter = $rule->iter( dir('test') );

while ( my $file = $iter->() ) {
    print $file->stringify, "\n";
    add_line_to_file( $file, "Sid was here.\n" );
}

# 1: Path::Class::File object
# 2: the line to prepend
sub add_line_to_file {
    my ( $file, $line ) = @_;

    # Read the old content, then write it back with the new line on top
    my $content = $file->slurp;
    $file->spew( $line . $content );

    return;
}
This could work:
perl -pi -e 'print "my text\n" if $. == 1; close ARGV if eof' * */*
($. is the line number in the current file; closing ARGV at eof resets it for the next file, so the text goes above the first line of every file rather than above every line.)
Please try this on a copy of your directory to make sure it does what you want.
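If you want to keep the originals around, -i also accepts a backup extension (a variant of the same one-liner; the .bak suffix is arbitrary):
perl -pi.bak -e 'print "my text\n" if $. == 1; close ARGV if eof' * */*
Each file then gets a .bak copy before it is modified.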
I'm a newbie in Perl scripting. I have two files. I want to compare their contents line by line and delete the matching lines. If I use a wildcard in one file to match multiple lines in the second file, it should delete the multiple matches and write the rest to another file. I got a snippet from another mail, but it does not take care of wildcards:
use strict;
use warnings;

$\ = "\n";

open my $FILE, "<", "file.txt" or die "Can't open file.txt: $!";
my %Set = map { $_ => undef } <$FILE>;

open my $FORBIDDEN, "<", "forbidden.txt" or die "Can't open forbidden.txt: $!";
my %Forbidden = map { $_ => undef } <$FORBIDDEN>;

open my $OUT, '>', 'output' or die $!;
my %Result = %Set;    # make a copy
delete $Result{$_} for keys %Forbidden;
print $OUT keys %Result;
I'm not sure what you mean by "wild card".
Nevertheless, there are many ways to do what you want. Since it's prettier to use existing modules, you can use the List::Compare module, available on CPAN.
With the following code you use this module to collect all the lines contained in one file (file.txt) but not in the other (forbidden.txt), so you implicitly match the lines which are equal. This code doesn't delete them from the file, but it finds them.
Your code would look like:
use strict;
use warnings;
use File::Slurp qw(read_file);    # CPAN module
use List::Compare;                # CPAN module

chomp( my @a_file = read_file 'file.txt' );
chomp( my @b_file = read_file 'forbidden.txt' );

# here it stores all the lines contained in 'file.txt'
# but not in 'forbidden.txt' in an array
my @a_file_only = List::Compare->new( \@a_file, \@b_file )->get_Lonly;

print "$_\n" for @a_file_only;

# here you could write these lines to a new file to store them;
# at this point I just print them out
The new approach:
use Data::Dumper;

foreach my $filter (@b_file) {
    # keep only the lines that do NOT match the current filter pattern
    @a_file = grep { $_ !~ /$filter/ } @a_file;
}
print Dumper(@a_file);
It reduces the lines in @a_file step by step, dropping the ones that match each filter, so each line of forbidden.txt acts as a pattern that can remove several lines at once.
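To also cover the "write the rest to another file" part, File::Slurp's write_file can dump the surviving lines afterwards (a small sketch; the output file name is made up):
use File::Slurp qw(write_file);
write_file 'output.txt', map { "$_\n" } @a_file;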
I'm trying to capture the output of a tail command to a temp file.
Here is a sample of my Apache access log.
Here is what I have tried so far.
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp ();
use File::Temp qw/ :seekable /;
chomp($tail = `tail access.log`);
my $tmp = File::Temp->new( UNLINK => 0, SUFFIX => '.dat' );
print $tmp "Some data\n";
print "Filename is $tmp\n";
I'm not sure how I can go about passing the output of $tail to this temporary file.
Thanks
I would use a different approach for tailing the file. Have a look at File::Tail; I think it will simplify things.
It sounds like all you need is
print $tmp $tail;
But you also need to declare $tail and you probably shouldn't chomp it, so
my $tail = `tail access.log`;
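Putting those pieces together, a minimal working version of your script might look like this (UNLINK => 0 keeps the temp file after the program exits, as in your original; ->filename is File::Temp's accessor for the generated name):
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp ();

# capture the tail output; no chomp, so the newlines survive
my $tail = `tail access.log`;

my $tmp = File::Temp->new( UNLINK => 0, SUFFIX => '.dat' );
print $tmp $tail;                            # write the captured output
print "Filename is ", $tmp->filename, "\n";
close $tmp;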
The classic Perl approach is to use proper names for the filehandles:
if(open LOGFILE, 'tail /some/log/file |' and open TAIL, '>/tmp/logtail')
{
    # read from the tail pipe, write to the output file
    print TAIL $_ while <LOGFILE>;
    close TAIL and close LOGFILE;
}
There are many ways to do this, but since you are happy to use modules, you might as well use File::Tail.
use v5.12;
use warnings 'all';
use File::Tail;

my $lines_required = 10;
my $out_file       = "output.txt";

open( my $out, '>', $out_file ) or die "$out_file: $!\n";

my $tail = File::Tail->new("/some/log/file");

for ( 1 .. $lines_required ) {
    print $out $tail->read;
}

close $out;
This sits and monitors the log file until it gets the 10 new lines. If you just want a copy of the last 10 lines as is, the easiest way is to use I/O redirection from the shell: tail /some/log/file > my_copy.txt
I need to write a Perl script to read gzipped files from a text-file list of their paths, concatenate them together, and output a new gzipped file. (I need to do this in Perl as it will be implemented in a pipeline.)
I am not sure how to accomplish the zcat and concatenation part; as the file sizes can be in GBs, I need to take care of the storage and run time as well.
So far I can think of it as:
use strict;
use warnings;
use File::Slurp;
use IO::Compress::Gzip qw(gzip $GzipError);

#-------check the input file specified-------------#
my $num_args = $#ARGV + 1;
if ($num_args != 1) {
    print "\nUsage: name.pl Filelist.txt \n";
    exit;
}
my $file_list = $ARGV[0];

#-------------read the file list into an array------#
my @fastqc_files = read_file($file_list);    # array of gzipped file paths
chomp @fastqc_files;

#-------use zcat over the array contents------------#
my $outputfile = "combined.txt";
open(my $combined_file, '>', $outputfile)
    or die "Could not open file '$outputfile': $!";

for my $fastqc_file (@fastqc_files) {
    open(my $in, '-|', "zcat $fastqc_file")
        or die "Can't open pipe from command 'zcat $fastqc_file': $!\n";
    while (my $line = <$in>) {
        print $combined_file $line;
    }
    close($in);
}
close($combined_file);

# compress the combined file
gzip $outputfile => "$outputfile.gz"
    or die "gzip failed: $GzipError\n";
I am not sure this is the right approach, and I would also appreciate guidance on the correct way to output the zipped file.
Thanks!
You don't need perl for this. You don't even need zcat/gzip, as gzipped files are cat-able:
cat $(cat pathfile) >resultfile
But if you really really need to try to get the extra compression by combining:
zcat $(cat pathfile)|gzip >resultfile
Adding: Also note the very first "related" link on the right, which seems to already answer this very question: How to concat two or more gzip files/streams
Thanks for the replies. The script runs well now:
#!/usr/bin/perl
use strict;
use warnings;
use File::Slurp;
use IO::Compress::Gzip qw(gzip $GzipError);

my @data = read_file('./File_list.txt');
my $out  = "./test.txt";

foreach my $data_file (@data) {
    chomp($data_file);
    system("zcat $data_file >> $out");
}

my $outzip = "./test.gz";
gzip $out => $outzip
    or die "gzip failed: $GzipError\n";
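If shelling out to zcat is ever a problem in the pipeline, the same job can be done entirely in Perl by streaming each input through IO::Uncompress::Gunzip into one IO::Compress::Gzip writer. A sketch under that assumption (file names as above; the 64 KB chunk size is arbitrary and keeps memory flat even for GB-sized inputs):
#!/usr/bin/perl
use strict;
use warnings;
use File::Slurp;
use IO::Compress::Gzip qw($GzipError);
use IO::Uncompress::Gunzip qw($GunzipError);

chomp( my @data = read_file('./File_list.txt') );

my $out = IO::Compress::Gzip->new('./test.gz')
    or die "gzip failed: $GzipError\n";

for my $data_file (@data) {
    # MultiStream handles inputs that are themselves concatenated gzip streams
    my $in = IO::Uncompress::Gunzip->new( $data_file, MultiStream => 1 )
        or die "gunzip of $data_file failed: $GunzipError\n";
    my $buffer;
    while ( $in->read( $buffer, 64 * 1024 ) > 0 ) {
        $out->print($buffer);    # recompress the uncompressed chunk
    }
    $in->close;
}
$out->close;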
File "/root/actual" is not getting over written with content of "/root/temp" via perl script. If manually edited "/root/actual" is getting modified.
copy("/root/actual","/root/temp") or die "Copy failed: $!";
open(FILE, "</root/temp") || die "File not found";
my #lines = <FILE>;
close(FILE);
my #newlines;
foreach(#lines) {
$_ =~ s/$aref1[0]/$profile_name/;
push(#newlines,$_);
}
open(FILE, ">/root/actual") || die "File not found";
print FILE #newlines;
close(FILE);
File "/root/actual" is not getting over written with content of "/root/temp" via perl script. If manually edited "/root/actual" is getting modified.
Do you mean that /root/temp isn't being replaced by /root/actual? Or is /root/temp being modified as you wish, but it's not copying over /root/actual at the end of your program?
I suggest that you read up on modern Perl programming practices. You need to have use warnings; and use strict; in your program. In fact, many people on this forum won't bother answering Perl questions unless use strict; and use warnings; are used.
Where is $aref1[0] coming from? I don't see @aref1 declared anywhere in your program. Nor, for that matter, $profile_name.
If you're reading the entire file and running a regular expression over each line, there's no reason to copy it over to a temporary file first.
I rewrote what you had in a more modern syntax:
use strict;
use warnings;
use autodie;

use constant {
    FILE_NAME => 'test.txt',
};

my $profile_name = "bar";             # Taking a guess
my @aref1 = qw(foo ??? ??? ???);      # Taking a guess

open my $input_fh, "<", FILE_NAME;
my @lines = <$input_fh>;
close $input_fh;

for my $line (@lines) {
    $line =~ s/$aref1[0]/$profile_name/;
}

open my $output_fh, ">", FILE_NAME;
print {$output_fh} @lines;
close $output_fh;
This works.
Notes:
use autodie; means you don't have to check whether the files opened successfully.
When I use a for loop, I can do in-place replacement in an array: the loop variable is an alias to the current entry of the array.
No need for copy or a temporary file since you're replacing the original file anyway.
I didn't use it here since you didn't, but map { s/$aref1[0]/$profile_name/ } @lines; can replace that for loop. See map.
Is it possible to use the move function from Perl's File::Copy module with a wildcard, so as to move a number of files sharing the same file extension?
So far, I can only get move to work if I explicitly name the files.
For example, I wanted to do something like this:
my $old_loc = "/share/cust/abc/*.dat";
my $arc_dir = "/share/archive_dir/";
Right now, I can do one file like so:
use strict;
use warnings;
use File::Copy;
my $old_loc = "/share/cust/abc/Mail_2011-10-17.dat";
my $arc_dir = "/share/archive_dir/Mail_2011-10-17.dat";
my $new_loc = $arc_dir;
#archive
print "Moving files to archive...\n";
move ($old_loc, $new_loc) || die "could not move $old_loc to $new_loc: $!\n";
What I want to do at the end of my Perl program is move all these files named *.dat to an archive directory.
You can use Perl's glob operator to get the list of files you need to move:
use strict;
use warnings;
use File::Copy;
my @old_files = glob "/share/cust/abc/*.dat";
my $arc_dir   = "/share/archive_dir/";

foreach my $old_file (@old_files)
{
    # capture just the file name after the last slash
    my ($short_file_name) = $old_file =~ m~/([^/]+\.dat)$~;
    my $new_file = $arc_dir . $short_file_name;
    move($old_file, $new_file) or die "Could not move $old_file to $new_file: $!\n";
}
This has the benefit of not relying on a system call, which is unportable, system-dependent, and possibly dangerous.
EDIT: A better way to do this is just to supply the new directory instead of the full new filename. (Sorry for not thinking of this earlier!)
move($old_file, $arc_dir) or die "Could not move $old_file to $arc_dir: $!\n";
# Probably a good idea to make sure $arc_dir ends with a '/' character, just in case
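If you'd rather not think about the trailing slash at all, the core File::Spec module joins path parts portably; a small sketch using the variables from the first version above:
use File::Spec;
my $new_file = File::Spec->catfile( $arc_dir, $short_file_name );
move( $old_file, $new_file ) or die "Could not move $old_file to $new_file: $!\n";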
From the File::Copy documentation:
If the destination already exists and is a directory, and the source
is not a directory, then the source file will be renamed into the
directory specified by the destination.
use strict;
use warnings;
use File::Copy;
my $old_loc = "/share/cust/abc/*.dat";
my $arc_dir = "/share/archive_dir/";
for my $file (glob $old_loc) {
move ($file, $arc_dir) or die $!;
}
You might have better luck with the system function (although you must be careful with it).
print system("mv -v /share/cust/abc/*.dat /share/archive_dir/");