Use sed in loop to search and replace for multiple files - perl

I have list of files as below:
RF1.lib
RF2.lib
RF3.lib
etc..
In each of the *.lib files I have to replace the string vlib with the name of the file, like RF1, RF2, etc.
I am using the sed command to search and replace the string as below:
sed -i -e 's/vlib/RF1/g' RF1.lib
sed -i -e 's/vlib/RF2/g' RF2.lib
But I have to run this separately for every file.
Is there a way I can open the files in a loop and use the sed command to do the replacement in each file?

Here is a Perl script which will replace the string in each file with the filename:
#!/usr/bin/perl
use warnings;
use strict;

foreach my $file (glob "/path/to/dir/*.lib")    # get all the '.lib' files from the dir
{
    my ($filename) = $file =~ m/([^\/]+)\.lib$/;    # get the filename without the .lib extension
    open my $fh, "<", $file or die $!;
    my @data = <$fh>;
    close $fh;
    foreach my $line (@data)
    {
        $line =~ s/\bvlib\b/$filename/g;    # replace the string with the filename
    }
    # write the modified lines back to the file
    open my $fhw, ">", $file or die "Couldn't modify file: $!";
    print $fhw @data;
    close $fhw;
}
Edit: As Borodin suggested in a comment, reading the whole file into a scalar:
foreach my $file (glob "/path/to/dir/*.lib")    # get all the '.lib' files from the dir
{
    my ($filename) = $file =~ m/([^\/]+)\.lib$/;    # get the filename without the .lib extension
    # read the whole file into a scalar
    my $data = do {
        local $/ = undef;
        open my $fh, "<", $file or die $!;
        <$fh>;
    };
    $data =~ s/\bvlib\b/$filename/g;    # replace the string with the filename
    # write the modified contents back to the file
    open my $fhw, ">", $file or die "Couldn't modify file: $!";
    print $fhw $data;
    close $fhw;
}
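If the files are all in one place, the same replacement also fits in a single command using Perl's in-place edit mode (a sketch; it assumes the files match RF*.lib and that vlib should become the basename without the .lib extension):
perl -i -pe 's/\bvlib\b/$1/g if $ARGV =~ m{([^/]+)\.lib$}' RF*.lib
Within -p, $ARGV holds the name of the file currently being read, so the capture gives the RF1/RF2 part for each file; add a suffix to -i (e.g. -i.bak) if you want backups.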

Sure. You can say:
for f1 in RF*.lib; do
  j=`basename "$f1" .lib`
  sed -i -e "s/vlib/$j/g" "$f1"
done

This might work for you (GNU sed):
sed -r 's|(.*)\..*|sed -i "s/vlib/\1/g" &|e' fileOfFiles
This builds and evaluates a sed command for each of the files listed in fileOfFiles. If you prefer, the output of the command can be piped into a shell for the same effect, like so:
sed -r 's|(.*)\..*|sed -i "s/vlib/\1/g" &|' fileOfFiles | sh
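Roughly the same thing can be written directly in Perl, looping over the list file instead of generating sed commands (a sketch; fileOfFiles is assumed to contain one *.lib path per line):
open my $list, '<', 'fileOfFiles' or die $!;
while ( my $file = <$list> ) {
    chomp $file;
    my ($name) = $file =~ m{([^/]+)\.[^.]*$} or next;    # basename without the extension
    system( 'sed', '-i', "s/vlib/$name/g", $file ) == 0
        or warn "sed failed for $file\n";
}
close $list;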

Related

Search a word in file and replace in Perl

I want to replace the word "a" with "red" in a.txt files. I want to edit the same file, so I tried this code, but it does not work. Where am I going wrong?
@files = glob("a.txt");
foreach my $file (@files)
{
    open(IN, $file) or die $!;
    <IN>;
    while (<IN>)
    {
        $_ =~ s/a/red/g;
        print IN $file;
    }
    close(IN)
}
I'd suggest it's probably easier to use perl in sed mode:
perl -i.bak -p -e 's/a/red/g' *.txt
-i is in-place edit (-i.bak saves the old file as .bak; -i without a suffix doesn't create a backup, which is often not a good idea).
-p creates a loop that iterates over all the files specified, one line at a time ($_), applying whatever code is specified by -e before printing that line. In this case s/// applies a sed-style pattern replacement to $_, so this runs a search and replace over every .txt file.
Perl uses <ARGV> or <> to do some magic - it checks whether you specified files on the command line - if you did, it opens and iterates over them. If you didn't, it reads from STDIN.
So you can also do:
somecommand.sh | perl -i.bak -p -e 's/a/red/g'
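Under the hood, -p just wraps the code from -e in a read-and-print loop over <>, roughly like this (a simplified sketch of what perl builds):
while (<>) {       # read each line of every file named on the command line
    s/a/red/g;     # the code supplied via -e
}
continue {
    print;         # -p prints $_ after every iteration
}
With -i in effect, that print goes to the replacement copy of the file being edited rather than to STDOUT.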
In your code you are using the same filehandle for writing that you used to open the file for reading. Open the file again in write mode and then write to it.
Always use a lexical filehandle and the three-argument form of open. Here is your modified code:
use warnings;
use strict;

my @files = glob("a.txt");
foreach my $file (@files)
{
    my @data;
    open my $fhin, "<", $file or die $!;
    while (<$fhin>)
    {
        $_ =~ s/\ba\b/red/g;
        push @data, $_;
    }
    close $fhin;
    open my $fhw, ">", $file or die "Couldn't modify file: $!";
    print $fhw @data;
    close $fhw;
}
Here is another way (read the whole file into a scalar):
foreach my $file (glob "/path/to/dir/a.txt")
{
    # read the whole file into a scalar
    my $data = do {
        local $/ = undef;
        open my $fh, "<", $file or die $!;
        <$fh>;
    };
    $data =~ s/\ba\b/red/g;    # replace a with red
    # write the modified contents back to the file
    open my $fhw, ">", $file or die "Couldn't modify file: $!";
    print $fhw $data;
    close $fhw;
}

Perl find and replace one liner

I've been looking at the other find-and-replace questions on Perl, and I'm not sure exactly how to implement this variation in one line. The only difference is that I want to replace two things: one is the original string, and the other is a modification of the original string. The code I have in a .pm module is:
my $num_args = $#ARGV + 1;
if ($num_args != 2) {
    print "\nUsage: pscript.pm path to file [VERSION]\n";
    exit;
}
my $version  = $ARGV[1];
my $revision = $ARGV[1];
$revision =~ s/.*\.//g;
open ($inHandle, "<", $ARGV[0]) or die $^E;
open ($outHandle, ">", "$ARGV[0].mod") or die $^E;
while (my $line = <$inHandle>)
{
    $line =~ s/\<Foo\>(.*)\<\/Foo\>/\<Foo\>$version\<\/Foo\>/;
    $line =~ s/\<Bar\>(.*)\<\/Bar\>/\<Bar\>$revision\<\/Bar\>/;
    print $outHandle $line;
}
close $inHandle;
close $outHandle;
unlink $ARGV[0];
rename "$ARGV[0].mod", $ARGV[0];
What is different is:
$revision =~ s/.*\.//g;
which turns the version X.X.X.1000 into just 1000, and then uses that for the find and replace.
Can this be done using the
perl -i.bak -p -e 's/old/new/g;' *.config
format?
Try this:
perl -i.bak -pe 's/\d+\.\d+\.\d+\.1000\b/1000/g' *.config
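If the goal is to keep both substitutions from the original script, a one-liner along these lines could also work (a sketch; it assumes the version, e.g. 1.2.3.1000, is passed as the first argument so that shift can pull it off @ARGV before the files are processed):
perl -i.bak -pe 'BEGIN { $v = shift; ($r = $v) =~ s/.*\.//; } s{<Foo>.*?</Foo>}{<Foo>$v</Foo>}; s{<Bar>.*?</Bar>}{<Bar>$r</Bar>}' 1.2.3.1000 *.config
The BEGIN block runs once before any files are read: it takes the version off @ARGV and derives the revision from it, so the remaining arguments are treated as the files to edit in place.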

read two text files as an argument and display it's contents using perl

I have two text files and I want to read them by passing arguments on the command line.
Now how do I take the second file? When I give the second file name, the command line is not reading it. Please suggest.
I have used $ARGV[0] and $ARGV[1] in the code to take the arguments from the command line.
$ ./read.pl file1 file2
Reading file1
Reading file2
$ cat read.pl
#!/usr/bin/perl
use strict;
use warnings;
readFile($_) for @ARGV;

sub readFile {
    my $filename = shift;
    print "Reading $filename\n";
    # OPEN CLOSE stuff here
}
my ($file1, $file2) = @ARGV;

open my $fh1, '<', $file1 or die $!;
open my $fh2, '<', $file2 or die $!;

while (<$fh1>) {
    # do something with $_
}
while (<$fh2>) {
    # do something with $_
}

close $fh1;
close $fh2;
Where $_ is the default variable.
run as:
perl readingfile.pl filename1 filename2
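For completeness, a version of the subroutine approach that actually displays each file's contents could look like this (a sketch; the open/close part that was left as a placeholder above is filled in with a plain read-and-print loop):
#!/usr/bin/perl
use strict;
use warnings;

readFile($_) for @ARGV;

sub readFile {
    my $filename = shift;
    print "Reading $filename\n";
    open my $fh, '<', $filename or die "Can't open $filename: $!";
    print while <$fh>;    # print every line of the file
    close $fh;
}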

Need a Perl script to match a string in all files inside a directory and push matching ones to new folder

I want a Perl script to search the mentioned directory, find the files
that contain the string ADMITTING DX, and push those files to a new folder.
I am new to Perl and was trying this:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $dir    = '/usr/share/uci_cmc/uci_new_files/';
my $string = 'ADMITTING DX';

open my $results, '>', '/home/debarshi/Desktop/results.txt'
    or die "Unable to open results file: $!";

find(\&printFile, $dir);

sub printFile {
    return unless -f and /\.txt$/;
    open my $fh, '<',, $_ or do {
        warn qq(Unable to open "$File::Find::name" for reading: $!);
        return;
    };
    while ($fh) {
        if (/\Q$string/) {
            print $results "$File::Find::name\n";
            return;
        }
    }
}
You are reading the lines from the file as:
while ($fh)
which should be
while (<$fh>)
You can certainly do it with Perl, and that's a fine way. But there's no complex text processing in your case, so I'd just suggest a bash one-liner:
for f in *.txt; do grep 'ADMITTING DX' "$f" >/dev/null && mv "$f" /path/to/destination/; done
And if you still need a Perl solution:
perl -e 'for my $f (glob "*.txt") { open F, $f or die $!; while (<F>) { if (/ADMITTING DX/) { rename $f, "/path/to/destination/$f" or die $!; last } } close F }'
There are two errors in your code. Firstly you have a superfluous comma in the open call in printFile. It should read
open my $fh, '<', $_ or do { ... };
and secondly you need a call to readline to fetch data from the opened file. You can do this with <$fh>, so the while loop should read
while (<$fh>) { ... }
Apart from that, your code is fine.
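For reference, here is the question's script with just those two fixes applied (everything else left as it was):
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my $dir    = '/usr/share/uci_cmc/uci_new_files/';
my $string = 'ADMITTING DX';

open my $results, '>', '/home/debarshi/Desktop/results.txt'
    or die "Unable to open results file: $!";

find(\&printFile, $dir);

sub printFile {
    return unless -f and /\.txt$/;
    open my $fh, '<', $_ or do {
        warn qq(Unable to open "$File::Find::name" for reading: $!);
        return;
    };
    while (<$fh>) {
        if (/\Q$string/) {
            print $results "$File::Find::name\n";
            return;
        }
    }
}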

Filehandle open() and the split variable

I am a beginner in Perl.
What I do not understand is the following:
To write a script that can:
Print the lines of the file $source with a comma delimiter.
Print the formatted lines to an output file.
Allow this output file to be specified on the command line.
Code:
my ( $source, $outputSource ) = @ARGV;
open( INPUT, $source ) or die "Unable to open file $source :$!";
Question: I do not understand how the output file can be specified on the command line when the code is being written.
I would rely on the shell's redirection operator instead, such as:
script.pl input.txt > output.txt
Then it is a simple case of doing this:
use strict;
use warnings;

while (<ARGV>) {
    s/\n/,/;
    print;
}
Then you can even merge several files with script.pl input1.txt input2.txt ... > output_all.txt. Or just do one file at a time, with one argument.
If I understood your question correctly, I hope this example helps.
Program:
use warnings;
use strict;

## Check for the input and output files as arguments on the command line.
die "Usage: perl $0 input-file output-file\n" unless @ARGV == 2;
my ( $source, $output_source ) = @ARGV;

## Open both files, one for reading and the other for writing.
open my $input, "<", $source
    or die "Unable to open file $source : $!\n";
open my $output, ">", $output_source
    or die "Unable to open file $output_source : $!\n";

## Read the file line by line, substitute the end of line with a ',' and print
## to the output file.
while ( my $line = <$input> ) {
    $line =~ tr/\n/,/;
    printf $output "%s", $line;
}

close $input;
close $output;
Execution:
$ perl script.pl infile outfile
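As a quick illustration of what the loop does to the data, the same logic can be run against an in-memory filehandle (hypothetical sample lines, just to show that each newline becomes a comma, including the last one):
use strict;
use warnings;

my $sample = "alpha\nbeta\ngamma\n";     # stand-in for the input file's contents
open my $in, '<', \$sample or die $!;    # read the scalar as if it were a file
my $joined = '';
while ( my $line = <$in> ) {
    $line =~ tr/\n/,/;                   # newline -> comma
    $joined .= $line;
}
close $in;
print "$joined\n";                       # prints: alpha,beta,gamma,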