I've been looking at the other Perl find-and-replace questions, and I'm not sure how to implement this variation in one line. The only difference is that I want to replace two things: one replacement uses the original string, and the other uses a modification of that string. The code I have in a .pm module is:
my $num_args = $#ARGV + 1;
if ($num_args != 2) {
print "\nUsage: pscript.pm path to file [VERSION]\n";
exit;
}
my $version = $ARGV[1];
my $revision = $ARGV[1];
$revision =~ s/.*\.//g;
open ($inHandle, "<", $ARGV[0]) or die $^E;
open ($outHandle, ">", "$ARGV[0].mod") or die $^E;
while(my $line = <$inHandle>)
{
$line =~ s/\<Foo\>(.*)\<\/Foo\>/\<Foo\>$version\<\/Foo\>/;
$line =~ s/\<Bar\>(.*)\<\/Bar\>/\<Bar\>$revision\<\/Bar\>/;
print $outHandle $line;
}
close $inHandle;
close $outHandle;
unlink $ARGV[0];
rename "$ARGV[0].mod", $ARGV[0];
What is different is:
$revision =~ s/.*\.//g;
which turns the version X.X.X.1000 into just 1000, and then uses that for the find and replace.
Can this be done using the
perl -i.bak -p -e 's/old/new/g;' *.config
format?
Try this:
perl -i.bak -pe 's/\d+\.\d+\.\d+\.1000\b/1000/g' *.config
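If you also want the one-liner to do both of the original replacements (the full version into <Foo>, and just its last component into <Bar>), something like this rough sketch should work; it assumes the version string is passed in via an environment variable named VERSION, which is not part of your original code:
VERSION=X.X.X.1000 perl -i.bak -pe '
    BEGIN { $v = $ENV{VERSION}; ($r = $v) =~ s/.*\.//; }  # $r keeps only the last component
    s{<Foo>.*?</Foo>}{<Foo>$v</Foo>};
    s{<Bar>.*?</Bar>}{<Bar>$r</Bar>};
' *.config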
My Perl program generates a text file which usually contains 200 lines. Sometimes it exceeds 200 lines (for example, 217 lines), and I need to trim off everything from the 201st line onward. I have used a counter to trim the extra lines. Is there a simpler, more efficient way to do this?
Code:
#!/usr/bin/perl -w
use strict;
use warnings;
my $filename1="channel.txt";
my $filename2="channel1.txt";
my $fh;
my $fh1;
my $count=1;
open $fh, '<', $filename1 or die "Can't open < $filename1: $!";
open $fh1, '>', $filename2 or die "Can't open > $filename2: $!";
while(my $line = <$fh>)
{
chomp $line;
if($count<201)
{
print $fh1 "$line\n";
}
$count++;
}
close ($fh1);
close($fh);
I have already mentioned this in my comment; this is the short version of it. If you are actually trying to trim the file, you can use a Perl one-liner instead of writing the whole script:
perl -pe 'last if($. == 201);' input.txt > result.txt
-p processes the file line by line and prints each line
-e executes the Perl code that follows
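An equivalent sketch using -n, which (unlike -p) does not print automatically, so we print each line ourselves and stop once 200 lines have been written:
perl -ne 'print; exit if $. == 200;' input.txt > result.txt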
You can also do this in a Perl script:
open my $fh,"<","input.txt";
open my $wh,">","result.txt";
print $wh scalar <$fh> for(1..10);
xxfelixxx already gave you the correct answer. I am just changing my earlier posted answer, to clean up your code and to write back to the original file:
use strict;
use warnings;
my @array;
my $filename="channel.txt";
open my $fh, '<', $filename or die "Can't open < $filename: $!";
while( my $line = <$fh> ) {
last if $. > 200;
push @array, $line;
}
close($fh);
open $fh, '>', $filename or die "Can't open > $filename: $!";
print $fh @array;
close($fh);
There is no need to keep your own counter; Perl has a special variable, $., which keeps track of the input line number. You can simplify your loop like so:
while ( defined( my $line = <$fh> ) ) {
    last if $. > 200;
    chomp $line;
    print $fh1 "$line\n";
}
perldoc perlvar - Search for INPUT_LINE_NUMBER.
To write back to the original file input.txt without using redirection:
perl -pi.tmp -we "last if $.>200;" input.txt
where
-i : opens a temp file and automatically replaces the file to be
edited with the temporary file after processing (the '.tmp'
is the suffix to use for the temp file during processing)
-w : command line flag to 'use warnings'
-p : magic; basically equivalent to coding:
    LINE: while (defined($_ = <ARGV>)) {
        "your code here"
    } continue {
        print;
    }
-e : perl code follows this flag (enclosed in double quotes for MSWin32 aficionados)
I want to replace the word "a" with "red" in a.txt. I want to edit the same file, so I tried this code, but it does not work. Where am I going wrong?
@files=glob("a.txt");
foreach my $file (@files)
{
open(IN,$file) or die $!;
<IN>;
while(<IN>)
{
$_=~s/a/red/g;
print IN $file;
}
close(IN)
}
I'd suggest it's probably easier to use perl in sed mode:
perl -i.bak -p -e 's/a/red/g' *.txt
-i is in-place edit (-i.bak saves the original as .bak; -i without a suffix doesn't create a backup, which is often not a good idea).
-p creates a loop that iterates over all the files specified, one line at a time ($_), applying whatever code is specified by -e before printing that line. In this case, s/// applies a sed-style pattern replacement to $_, so this runs a search and replace over every .txt file.
Perl uses <ARGV> or <> to do some magic - it checks whether you specified files on the command line - if you did, it opens and iterates over them; if you didn't, it reads from STDIN.
So you can also do:
somecommand.sh | perl -i.bak -p -e 's/a/red/g'
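As a rough sketch of that magic in script form (the file name replace.pl is just an example), the same substitution can be written with the diamond operator, which reads the files named on the command line, or STDIN if none are given:
#!/usr/bin/perl
use strict;
use warnings;

# <> iterates over the files in @ARGV, or STDIN when @ARGV is empty
while (my $line = <>) {
    $line =~ s/a/red/g;
    print $line;
}
Run it as perl replace.pl *.txt or somecommand.sh | perl replace.pl. Note that, unlike -i, this prints to STDOUT instead of editing the files in place.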
In your code you are using the same filehandle for writing that you used to open the file for reading. Open the file separately in write mode and then write to it.
Always use lexical filehandles and the three-argument form of open. Here is your modified code:
use warnings;
use strict;
my @files = glob("a.txt");
foreach my $file (@files)
{
    my @data;
    open my $fhin, "<", $file or die $!;
    while (<$fhin>)
    {
        s/\ba\b/red/g;
        push @data, $_;
    }
    close $fhin;
    open my $fhw, ">", $file or die "Couldn't modify file: $!";
    print $fhw @data;
    close $fhw;
}
Here is another way (read the whole file into a scalar):
foreach my $file (glob "/path/to/dir/a.txt")
{
#read whole file in a scalar
my $data = do {
local $/ = undef;
open my $fh, "<", $file or die $!;
<$fh>;
};
$data =~ s/\ba\b/red/g; #replace a with red,
#modify the file
open my $fhw, ">", $file or die "Couldn't modify file: $!";
print $fhw $data;
close $fhw;
}
My program searches for a string in multiple files in a directory. The code works for single-word patterns like perl but fails to find a longer string like Status Code 1.
Can you please let me know how to search for strings with multiple words?
#!/usr/bin/perl
my @list = `find /home/ad -type f -mtime -1`;
# printf("List is $list[1]\n");
foreach (@list) {
# print("Now is : $_");
open(FILE, $_);
$_ = <FILE>;
close(FILE);
unless ($_ =~ /perl/) { # works, but fails to find string "Status Code 1"
print "found\n";
my $filename = 'report.txt';
open(my $fh, '>>', $filename) or die "Could not open file '$filename' $!";
say $fh "My first report generated by perl";
close $fh;
} # end unless
} # end For
There are a number of problems with your code
You must always use strict and use warnings at the top of every Perl program. There is little point in declaring anything with my without strict in place
The lines returned by the find command will have a newline at the end which must be removed before Perl can find the files
You should use lexical file handles (my $fh instead of FILE) and the three-parameter form of open as you do with your output file
$_ = <FILE> reads only the first line of the file into $_
unless ($_ =~ /perl/) is inverted logic, and there's no need to specify $_ as it is the default. You should write if ( /perl/ )
You can't use say unless you have use feature 'say' at the top of your program (or use 5.010, which adds all features available in Perl v5.10)
It is also best to avoid using shell commands as Perl is more than able to do anything that you can using command line utilities. In this case -f $file is a test that returns true if the file is a plain file, and -M $file returns the (floating point) number of days since the file's modification time
This is how I would write your program
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;
for my $file ( glob '/home/ad/*' ) {
    next unless -f $file and int(-M $file) == 1;
    open my $fh, '<', $file or die $!;
    while ( <$fh> ) {
        if ( /perl/ ) {
            print "found\n";
            my $filename = 'report.txt';
            open my $out_fh, '>>', $filename or die "Could not open file '$filename': $!";
            say $out_fh "My first report generated by perl";
            close $out_fh;
            last;
        }
    }
}
It should have matched, unless $_ contains the text in a different case. Try this:
unless($_ =~ /Status\s+Code\s+1/i) {
Change
unless ($_ =~ /perl/) {
to:
unless ($_ =~ /(Status Code 1)/) {
I am certain the above works, except it's case sensitive.
Since you question it, I rewrote your script to make more sense of what you're trying to accomplish and to implement the above suggestion. Correct me if I am wrong, but you're trying to make a script that matches "Status Code 1" in a set of files last modified within one day and prints each matching filename to a text file.
Anyways, below is what I recommend:
#!/usr/bin/perl
use strict;
use warnings;
my $output_file = 'report.txt';
my #list = `find /home/ad -type f -mtime -1`;
foreach my $filename (@list) {
chomp $filename; # remove the newline that backticks leave on each filename
print "PROCESSING: $filename\n";
open (INCOMING, "<$filename") || die "FATAL: Could not open '$filename' $!";
foreach my $line (<INCOMING>) {
if ($line =~ /(Status Code 1)/) {
open( FILE, ">>$output_file") or die "FATAL: Could not open '$output_file' $!";
print FILE sprintf ("%s\n", $filename);
close(FILE) || die "FATAL: Could not CLOSE '$output_file' $!";
# Bail when we get the first match
last;
}
}
close(INCOMING) || die "FATAL: Could not close '$filename' $!";
}
I'm trying to remove the first line of the output file "bss_concurrent_calls.txt" using perl -pe instead of system and sed -i. The server I'm using is Solaris 9 (in fact, it didn't recognize "sed -i").
open my $file_in, "<", "/export/home/cassi/4.1-15_HPBX/cfg/LicenseCounters.log" or die($!);
open my $file_out, '>', 'bss_concurrent_calls.txt' or die $!;
while( <$file_in> ) {
my #columns = split /\s+/, $_;
print $file_out "$columns[0]\t$columns[2]\n";
}
system('sed "1d" bss_concurrent_calls.txt');
close $file_in;
close $file_out or die $!;
No need to call sed from Perl here (or anywhere else).
perl -ane 'print "$F[0]\t$F[2]\n" unless $. == 1' \
< /export/.../LicenseCounters.log > bss_concurrent_calls.txt
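For reference, -a autosplits each input line on whitespace into @F, so the one-liner above is roughly equivalent to this sketch:
#!/usr/bin/perl
use strict;
use warnings;

while (my $line = <>) {
    next if $. == 1;              # skip the header line
    my @F = split ' ', $line;     # what -a does for every line
    print "$F[0]\t$F[2]\n";
}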
I like #choroba's answer, but if you want to keep your program structure:
use autodie;
open my $file_in, '<', '/export/home/cassi/4.1-15_HPBX/cfg/LicenseCounters.log';
open my $file_out, '>', 'bss_concurrent_calls.txt';
my $line1 = <$file_in>; # read the first line
while (<$file_in>) {
print $file_out join("\t", (split)[0,2]), "\n";
}
close $file_in;
close $file_out;
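Or, to keep a single read loop, skip the first line with a simple counter: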
while( <$file_in> ) {
next unless $i++;
my #columns = split /\s+/, $_;
print $file_out "$columns[0]\t$columns[2]\n";
}
I have a file that looks like this:
I,like
blah...
I want to replace only the first line with 'i,am' to get:
i,am
blah...
These are big files, so this is what I did (based on this):
open(FH, "+< input.txt") or die "FAIL!";
my $header = <FH>;
chop($header);
$header =~ s/I,like/i,am/g;
seek FH, 0, 0; # go back to start of file
printf FH $header;
close FH;
However, I get this when I run it:
i,amke
blah...
It looks like the 'ke' from like is still there. How do I get rid of it?
What I would do is probably something like this:
perl -i -pe 'if ($. == 1) { s/.*/i,am/; }' yourfile.txt
This will only affect the first line, when the line counter for the current file handle, $., is equal to 1. The regex will replace everything except the newline. If you need it to match your specific line, you can include that in the if statement:
perl -i -pe 'if ($. == 1 and /^I,like$/) { s/.*/i,am/; }' yourfile.txt
You can also look into Tie::File, which allows you to treat the file like an array, which means you can simply do $line[0] = "i,am\n". It is mentioned that there may be performance issues with this module, however.
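For completeness, a minimal sketch of that Tie::File approach (assuming the performance caveat is acceptable for your file size):
use strict;
use warnings;
use Tie::File;

tie my @lines, 'Tie::File', 'input.txt' or die $!;
$lines[0] = 'i,am';   # Tie::File rewrites the record with the correct line ending
untie @lines;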
If the replacement has a different length than the original, you cannot use the seek-and-overwrite technique from your question. You can, for example, create a new file and then rename it to the original name.
open my $IN, '<', 'input.txt' or die $!;
open my $OUT, '>', 'input.new' or die $!;
my $header = <$IN>;
$header =~ s/I,like/i,am/g;
print $OUT $header;
print $OUT $_ while <$IN>; # Just copy the rest.
close $IN;
close $OUT or die $!;
rename 'input.new', 'input.txt' or die $!;
I'd just use Tie::File:
#! /usr/bin/env perl
use common::sense;
use Tie::File;
sub firstline {
tie my @f, 'Tie::File', shift or die $!;
$f[0] = shift;
untie @f;
}
firstline $0, '#! ' . qx(which perl);
Usage:
$ ./example
$ head -2 example
#! /bin/perl
use common::sense;