Replacing a pattern in a file without reading the whole file - Perl

I want to replace a pattern 'good' with 'bad' in my file. Currently what I am doing is:
#!/perl/bin/perl
$filename= "abc.txt";
open my $fh, $filename;
my $text = do { local( $/ ); <$fh> };
close $fh;
$text =~s/good/bad/g;
Is there any way I can do this without reading the whole file?
Edit: Suppose I know that there's only one 'good' in the file.
P.S. Hi, I am new here. Hope I am doing this correctly.

You could use Tie::File:
#!/usr/bin/perl
use strict;
use Tie::File;
my $filename = ...;
tie my @lines, 'Tie::File', $filename or die;
for (@lines) {
s/good/bad/g and last;
}
To make sure that the whole file is not slurped in, you want to read the lines one by one, e.g. using:
#!/usr/bin/perl
use strict;
use Tie::File;
my $filename = ...;
tie my @lines, 'Tie::File', $filename or die;
for(my $i=0; ; $i++) {
last if !defined $lines[$i]; # eof
last if $lines[$i] =~ s/good/bad/g;
}
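If you would rather avoid Tie::File entirely, the classic alternative is to stream the file line by line into a temporary file and rename it over the original; memory use stays constant regardless of file size. A minimal sketch, assuming the .tmp name is free to use:
#!/usr/bin/perl
use strict;
use warnings;
my $filename = "abc.txt";
my $tmpname  = "$filename.tmp";   # illustrative temp-file name
open my $in,  '<', $filename or die "Can't read $filename: $!";
open my $out, '>', $tmpname  or die "Can't write $tmpname: $!";
while (my $line = <$in>) {
    $line =~ s/good/bad/;   # at most one 'good' expected, per the edit
    print $out $line;
}
close $in;
close $out or die "Can't close $tmpname: $!";
rename $tmpname, $filename or die "Can't rename $tmpname: $!";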


perl array prints as GLOB(0x80070b90)

I have a file which contains a list of email addresses separated by a semi-colon, configured much like this (but much larger):
$ cat email_casper.txt
casper1@foo.com; casper2@foo.com; casper3@foo.com; casper.casper4@foo.com;
#these throw outlook error :
#casper101@foo.com ; casper100@foo.com
#cat /tmp/emailist.txt | tr '\n' '; '
#cat /tmp/emallist.txt | perl -nle 'print /\<(.*)\>/' | sort
I want to break it up on the semicolon - so I suck the whole file into an array, where supposedly the contents are split on the semicolon.
#!/usr/bin/perl
use strict;
use warnings;
my $filename = shift @ARGV ;
open(my $fh, '<', $filename) or die "Could not open file $filename $!";
my @values = split(';', $fh);
foreach my $val (@values) {
print "$val\n";
}
exit 0 ;
But the file rewards me with a glob. I just don't know what is going on.
$ ./spliton_semi.pl email_casper.txt
GLOB(0x80070b90)
If I use Data::Dumper I get
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper ;
my $filename = shift @ARGV ;
open(my $fh, '<', $filename) or die "Could not open file $filename $!";
my @values = split(';', $fh);
print Dumper \@values ;
This is what the Dumper returns :
$ ./spliton_semi.pl email_casper.txt
$VAR1 = [
'GLOB(0x80070b90)'
];
You do not "suck the whole file into an array". You don't even attempt to read from the file handle. Instead, you pass the file handle to split. Expecting a string, it stringifies the file handle into GLOB(0x80070b90).
You could read the file into an array of lines as follows:
my @lines = <$fh>;
for my $line (@lines) {
...
}
But it's far simpler to read one line at a time.
while ( my $line = <$fh> ) {
...
}
In fact, there is no reason not to use ARGV here, simplifying your program to the following:
#!/usr/bin/perl
use strict;
use warnings;
use feature qw( say );
while (<>) {
chomp;
say for split /\s*;\s*/, $_;
}
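Since the magic <> filehandle reads from whatever files are named on the command line, the invocation stays the same as before:
$ perl spliton_semi.pl email_casper.txt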
This line
my @values = split(';', $fh);
is not reading from the filehandle like you think it is. You're actually calling split on the filehandle object itself.
You want this:
my $line = <$fh>;
my #values = split(';', $line);
Starting point:
#!/usr/bin/perl
use strict;
use warnings;
open(my $fh, '<', 'dummy.txt')
or die "$!";
my @values;
while (<$fh>) {
chomp;
# original
# push(@values, split(';', $_));
# handle white space
push(@values, split(/\s*;\s*/, $_));
}
close($fh);
foreach my $val (@values) {
print "$val\n";
}
exit 0;
Output for your example:
$ perl dummy.pl
casper1@foo.com
casper2@foo.com
casper3@foo.com
casper.casper4@foo.com

perl - extract duplicate sequences from a multi-fasta file

I have a big fasta file input.fasta which contains many duplicate sequences. I want to enter a header name and extract out all the sequences with the matching header. I know this could easily be done with awk/sed/grep, but I need Perl code.
input.fasta
>OGH38127_some_organism
PAAALGFSHLARQEDSALTPKHYTWTAPGEGDVRAPCPVLNTLANHEFLPHNGKNITVDK
AITALGDAMNISPALATTFFTGGLKTNPTPNATWFDLDMLHKHNVLEHDGSLSRRDMHFD
TSNKFDAATFANFLSYFDANATVLGVNETADARARHAYDMSKMNPEFTITSSMLPIMVGE
SVMMMLVWGSVEEPGAQRDYFEYFFRNERLPVELGWTPGETEIGVPVVTAMITAMVAASP
TDVP
>ABC14110_some_different_org_name
WWVAPGPGDSRGPCPGLNTLANHGYLPHDGKGITLSILADAMLDGFNIARSDALLLFTQ
AIRTSPQYPATNSFNLHDLGRDQLNRHNVLEHDASLSRADDFFGSNHIFNETVFDESRAY
AMLANSKIARQINSKAFNPQYKFTSKTEQFSLGEIAAPIIAFGNSTSGEVNRTLVEYFFM
NERLPIELGWKKSEDGIALDDILRVTQMISKAASLITPSALSWTAETLTP
>OGH38127_some_organism
LPWSRPGPGAVRAPCPMLNTLANHGFLPHDGKNISEARTVQALGRALNIEKELSQFLFEK
ALTTNPHTNATTFSLNDLSRHNLLEHDASLSRQDAYFGDNHDFNQTIFDETRSYWPHPVI
DIQAAALSRQARVNTSIAKNPTYNMSELGLDFSYGETAAYILILGDKDFGKVNRSWVEYL
FENERLPVELGWTRHNETITSDDLNTMLEKVVN
.
.
.
I have tried with the following script but it is not giving any output.
script.pl
#!/perl/bin/perl -w
use strict;
use warnings;
print "Enter a fasta header to search for:\n";
my $head = <>;
my $file = "input.fasta";
open (READ, "$file") || die "Cannot open $file: $!.\n";
my %seqs;
my $header;
while (my $line = <READ>){
chomp $line;
$line =~ s/^>(.*)\n//;
if ($line =~ m/$head/){
$header = $1;
}
}
close (READ);
open( my $out , ">", "out.fasta" ) or die $!;
my @count_seq = keys %seqs;
foreach (@count_seq){
print $out $header, "\n";
print $out $seqs{$header}, "\n";
}
exit;
Please help me correct this script.
Thanks!
If you use the Bioperl module Bio::SeqIO to handle the parsing of the fasta files, it becomes really simple:
#!/usr/bin/perl
use warnings;
use strict;
use Bio::SeqIO;
my ($file, $name) = @ARGV;
my $in = Bio::SeqIO->new(-file => $file, -format => "fasta");
my $out = Bio::SeqIO->new(-fh => \*STDOUT, -format => "fasta");
while (my $s = $in->next_seq) {
$out->write_seq($s) if $s->display_id eq $name;
}
Run with: perl grep_fasta.pl input.fasta OGH38127_some_organism
There's no need to store the sequences in memory, you can print them directly when reading the file. Use a flag variable ($inside in the example) that tells you whether you're reading the desired sequence or not.
#! /usr/bin/perl
use warnings;
use strict;
my ($file, $header) = @ARGV;
my $inside;
open my $in, '<', $file or die $!;
while (<$in>) {
$inside = $1 eq $header if /^>(.*)/;
print if $inside;
}
Run as
perl script.pl file.fasta OGH38127_some_organism > output.fasta

Tie file not working for loops

I have a script which pulls all the .pm files in my directory, looks for a certain pattern, and changes it to the desired value. I tried Tie::File, but it's not looking into the content of the files.
use File::Find;
use Data::Dumper qw(Dumper);
use Tie::File;
my @content;
find( \&wanted, '/home/idiotonperl/project/');
sub wanted {
push @content, $File::Find::name;
return;
}
my @content1 = grep{$_ =~ /.*.pm/} @content;
@content = @content1;
for my $absolute_path (@content) {
my @array='';
print $absolute_path;
tie @array, 'Tie::File', $absolute_path or die qq{Not working};
print Dumper @array;
foreach my $line(@array) {
$line=~s/PERL/perl/g;
}
untie @array;
}
the output is
Not working at tiereplacer.pl line 22.
/home/idiotonperl/main/content.pm
This is not working as intended (looking into the content of all .pm files). If I try to do the same operation on a single test file under my home directory, the content is getting replaced:
@content = 'home/idiotonperl/option.pm'
it's working as intended.
I would not recommend using tie for that. The simple code below should do as asked.
use warnings;
use strict;
use File::Copy qw(move);
use File::Glob ':bsd_glob';
my $dir = '/home/...';
my @pm_files = grep { -f } glob "$dir/*.pm";
foreach my $file (@pm_files)
{
my $outfile = 'new_' . $file; # but better use File::Temp
open my $fh, '<', $file or die "Can't open $file: $!";
open my $fh_out, '>', $outfile or die "Can't open $outfile: $!";
while (my $line = <$fh>)
{
$line =~ s/PERL/perl/g;
print $fh_out $line; # write out the line, changed or not
}
close $fh;
close $fh_out;
# Uncomment after testing, to actually overwrite the original file
#move $outfile, $file or die "Can't move $outfile to $file: $!";
}
The glob from File::Glob allows you to specify filenames similarly as in the shell. See docs for accepted metacharacters. The :bsd_glob is better for treatment of spaces in filenames. †
If you need to process files recursively then you indeed want a module. See File::Find::Rule
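For example, a recursive collection with File::Find::Rule might look like this (a sketch; the module is not in core and reuses the $dir from above):
use File::Find::Rule;
# recursively collect every .pm file under $dir
my @pm_files = File::Find::Rule->file->name('*.pm')->in($dir);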
The rest of the code does what we must do when changing file content: copy the file. The loop reads each line, changes the ones that match, and writes each line to another file. If the match fails then s/ makes no changes to $line, so we just copy those that are unchanged.
In the end we move that file to overwrite the original using File::Copy.
The new file is temporary and I suggest creating it with File::Temp.
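A minimal sketch of that File::Temp variant, replacing the 'new_' . $file line above (the 'pmedit_' template is illustrative; UNLINK => 0 keeps the file around for the later move):
use File::Temp qw(tempfile);
# create the temp file in the same directory so the final
# move stays on one filesystem
my ($fh_out, $outfile) = tempfile('pmedit_XXXXXX', DIR => $dir, UNLINK => 0);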
† The glob pattern "$dir/..." allows for an injection bug for directories with particular names. While this is very unusual it is safer to use the escape sequence
my @pm_files = grep { -f } glob "\Q$dir\E/*.pm";
In this case File::Glob isn't needed since \Q escapes spaces as well.
Solution using my favorite module: Path::Tiny. Unfortunately, it isn't a core module.
use strict;
use warnings;
use Path::Tiny;
my $iter = path('/some/path')->iterator({recurse => 1});
while( my $p = $iter->() ) {
next unless $p->is_file && $p =~ /\.pm\z/i;
$p->edit_lines(sub {
s/PERL/perl/;
#add more line-editing
});
#also check the path(...)->edit(...) as an alternative
}
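For reference, the path(...)->edit(...) alternative mentioned in the comment slurps the whole file into $_, applies the callback, and writes it back, so it is appropriate only for files that fit in memory; inside the loop it would read roughly:
$p->edit(sub { s/PERL/perl/g });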
Working fine for me:
#!/usr/bin/env perl
use common::sense;
use File::Find;
use Tie::File;
my @content;
find(\&wanted, '/home/mishkin/test/t/');
sub wanted {
push @content, $File::Find::name;
return;
}
@content = grep{$_ =~ /.*\.pm$/} @content;
for my $absolute_path (@content) {
my @array='';
say $absolute_path;
tie @array, 'Tie::File', $absolute_path or die "Not working: $!";
for my $line (@array) {
$line =~ s/PERL/perl/g;
}
untie @array;
}

Merge txt files in Perl, but modify them before, leaving original files untouched

I've already posted a question and fixed the problem in my code, but now my "specification has changed" so to say, and now I need to change some things about it.
Here's some code that takes all .txt files from the current directory, cuts off the last line of the first file, the first and last lines of every following file, and the first line of the last file, and writes everything into a new file (in other words: it merges all files, deleting headers and footers so that the new file has only one header and one footer).
#!/usr/bin/perl
use warnings;
use Cwd;
use Tie::File;
use Tie::Array;
my $cwd = getcwd();
my $buff = '';
# Get all files in cwd.
my @files = grep ( -f ,<*.txt>);
# Cut off header and footer of $files [1] to $files[$#files-1],
# but only footer of $files[0] and header of $#files[$#files]
for (my $i = 0; $i <= $#files; $i++) {
print 'Opening ' . $files[$i] . "\n";
tie (@lines, Tie::File, $files[$i]) or die "can't update $file: $!";
splice @lines, 0, 1 unless $i == 0;
splice @lines, -1, 1 unless $i == $#files;
untie @lines;
open (file, "<", $files[$i]) or die "can't update $file: $!";
while (my $line =<file>) {
$buff .= $line;
}
close file;
}
# Write the buffer to a new file.
my $allfilename = $cwd.'/Trace.txt';
print 'Writing all files into new file: ' . $allfilename . "\n";
open $outputfile, ">".$allfilename or die "can't write to new file $outputfile: $!";
# Write the buffer into the output file.
print $outputfile $buff;
close $outputfile;
My problem: I don't want to change the original files, but my code does exactly that, and I'm having trouble coming up with a solution. The simplest way (simple meaning not having to change too much code) would be to copy all the files to a tmp directory, mess around with them there, and leave the original files untouched. Problem: a simple use of dircopy doesn't do it for me, since you have to give a new tmp dir to the dircopy function, making the code usable only on Windows or only on UNIX systems (but I need portability).
The next approach would be to make use of the File::Temp module but I'm really having trouble with the docs on this one.
Does anybody have a good idea on this one?
I suspected that you didn't really want your original files modified when I answered your previous question.
I don't understand why you've gone back to accumulating all the text in a buffer before printing it, or why you've removed use strict, which is essential to any well-written Perl code.
Here's my previous solution modified to leave the input data untouched.
use strict;
use warnings;
use Tie::File;
my @files = grep -f, glob '*.txt';
my $all_filename = 'Trace.txt';
open my $out_fh, '>', $all_filename or die qq{Unable to open "$all_filename" for output: $!};
for my $i ( 0 .. $#files ) {
my $file = $files[$i];
next if $file eq $all_filename;
print "Opening $file\n";
tie my @lines, 'Tie::File', $file or die qq{Can't open "$file": $!};
my ($start, $end) = (0, $#lines);
++$start unless $i == 0;
--$end unless $i == $#files;
print $out_fh "$_\n" for @lines[$start..$end];
}
close $out_fh;
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
my $outfile = 'Trace.txt';
# Get all files in cwd.
my @files = grep { -f && $_ ne $outfile } <*.txt>;
open my $outfh, '>', $outfile;
for my $file (#files) {
my @lines = do { local @ARGV = $file; <> };
shift @lines unless $file eq $files[0];
pop @lines unless $file eq $files[-1];
print $outfh @lines;
}
Just do not use Tie::File. Or is there a reason you do this, for example that all your files together do not fit into memory?
A version very close to your current implementation would be something like the following (untested) code. It just skips the part where you update the file, only to reopen and read it afterwards. (Note that this is certainly not a very efficient or overly elegant way to do this; it just sticks to your implementation as closely as possible.)
#!/usr/bin/perl
use warnings;
use Cwd;
# use Tie::File;
# use Tie::Array;
my $cwd = getcwd();
my $buff = '';
# Get all files in cwd.
my @files = grep ( -f ,<*.txt>);
# Cut off header and footer of $files [1] to $files[$#files-1],
# but only footer of $files[0] and header of $#files[$#files]
for (my $i = 0; $i <= $#files; $i++) {
print 'Opening ' . $files[$i] . "\n";
open (my $fh, "<", $files[$i]) or die "can't open $files[$i] for reading: $!";
my @lines = <$fh>;
splice @lines, 0, 1 unless $i == 0;
splice @lines, -1, 1 unless $i == $#files;
foreach my $line (@lines) {
$buff .= $line;
}
}
# Write the buffer to a new file.
my $allfilename = $cwd.'/Trace.txt';
print 'Writing all files into new file: ' . $allfilename . "\n";
open $outputfile, ">".$allfilename or die "can't write to new file $outputfile: $!";
# Write the buffer into the output file.
print $outputfile $buff;
close $outputfile;
Based on Miller's answer, but more suitable for large files.
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
my $outfile = 'Trace.txt';
# Get all files in cwd.
my @files = grep { -f && $_ ne $outfile } <*.txt>;
open my $outfh, '>', $outfile;
my $counter = 0;
for my $file (#files) {
open my $fh, '<', $file;
my ($line, $prev) = ('', '');
my $l = 0;
while ($line = <$fh>) {
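# $prev trails one line behind, so a file's last line is never printed
# inside the loop; the check below also skips the first line (the header)
# of every file except the first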
print $outfh $prev unless $l++ == 1 and $counter > 0;
$prev = $line;
}
$counter++;
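# print the held-back last line (the footer) for the final file only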
print $outfh $prev if $counter == @files and $l > 0;
close $fh;
}

Loop through file in perl and remove strings with less than 4 characters

I am trying to bring in a file, loop through it, remove any strings that have fewer than four characters in them, and then print the list. I come from the JavaScript world and Perl is brand new to me.
use strict;
use warnings;
sub lessThan4 {
open( FILE, "<names.txt" );
my @LINES = <FILE>;
close( FILE );
open( FILE, ">names.txt" );
foreach my $LINE ( @LINES ) {
print FILE $LINE unless ( $LINE.length() < 4 );
}
close( FILE );
}
use strict;
use warnings;
# automatically throw exception if open() fails
use autodie;
sub lessThan4 {
my @LINES = do {
# modern perl uses lexical, and three arg open
open(my $FILE, "<", "names.txt");
<$FILE>;
};
# remove newlines
chomp(@LINES);
open(my $FILE, ">", "names.txt");
foreach my $LINE ( @LINES ) {
print $FILE "$LINE\n" unless length($LINE) < 4;
# possible alternative to 'unless'
# print $FILE "$LINE\n" if length($LINE) >= 4;
}
close($FILE);
}
You're basically there. I hope you'll find some comments on your code useful.
# Well done for including these. So many new Perl users don't
use strict;
use warnings;
# Perl programs traditionally use all lower-case subroutine names
sub lessThan4 {
# 1/ You should use lexical variables for filehandles
# 2/ You should use the three-argument version of open()
# 3/ You should always check the return value from open()
open( FILE, "<names.txt" );
# Upper-case variable names in Perl are assumed to be global variables.
# This is a lexical variable, so name it using lower case.
my @LINES = <FILE>;
close( FILE );
# Same problems with open() here.
open( FILE, ">names.txt" );
foreach my $LINE ( @LINES ) {
# This is your biggest problem. Perl doesn't yet embrace the idea of
# calling methods to get properties of a variable. You need to call
# length() as a function.
print FILE $LINE unless ( $LINE.length() < 4 );
}
close( FILE );
}
Rewriting to take all that into account, we get the following:
use strict;
use warnings;
sub less_than_4 {
open( my $in_file_h, '<', 'names.txt' ) or die "Can't open file: $!";
my @lines = <$in_file_h>;
close( $in_file_h );
open( my $out_file_h, '>', 'names.txt' ) or die "Can't open file: $!";
foreach my $line ( @lines ) {
# Note: $line will include the newline character, so you might need
# to increase 4 to 5 here
print $out_file_h $line unless length $line < 4;
}
close( $out_file_h );
}
I am trying to bring a file loop through it and remove any strings that have less than four characters in it and then print the list.
I suppose you need to remove strings from the file which are less than 4 chars in length.
#!/usr/bin/perl
use strict;
use warnings;
open (my $FH, "<", "names.txt") or die "Cannot open names.txt: $!";
my @final_list;
while (my $line = <$FH>) {
map {
length($_) > 4 and push (@final_list, $_) ;
} split (/\s/, $line);
}
print "\nWords with more than 4 chars: @final_list\n";
Please try this one:
use strict;
use warnings;
my @new;
while(<DATA>)
{
#Push all the values less than 4 characters
push(@new, $_) unless(length($_) > 4);
}
print @new;
__DATA__
Williams
John
Joe
Lee
Albert
Francis
Sun