how can I choose particular lines from a file by using a second file - perl

I am trying to filter a file (to find the best BLAST hits) by using a second file.
The file that I want to filter looks like this:
conserved1 chr22 100.00 92 0 0 1 92 19679676 19679767 2e-44 182
.....................
The second file (which is the first input in my script) that I’m using is like this:
conserved1 92
conserved2 76
.....................
(the first column is the name of my ‘item’, which matches the first column of the previous file exactly, and the second column is the size).
I stored the second file in a hash in order to connect the first file with the sizes of the conserved elements, and to keep only the lines whose alignment length (4th column) is at least 70% of the size from the 2nd file.
I wrote this script for that purpose; it works, but it prints each chosen line more than once.
How can I fix this?
my $size_file = $ARGV[0];
my $alignment_file = $ARGV[1];
open my $con_info, $size_file or die "Could not open $size_file: $!";
my %hash;
while (<$con_info>)
{
chomp;
my ($key, $val) = split /\t/;
$hash{$key} .= exists $hash{$key} ? "$val" : $val;
}
#print "# %hash\n", Dump \%hash;
#print %hash;
#print "#{[%hash]}";
close $con_info;
open my $al_info, $alignment_file or die "Could not open $alignment_file: $!";
while (my $line = <$al_info>) {
chomp;
my @data = split('\t', $line);
my $con_name = $data[0];
my $evalue = $data[10];
my $percent = $data[2];
my $length = $data[3];
# print $con_name. "\n";
foreach my $key (keys %hash) {
if ($key == $con_name) {
#print "key: $key, value: $hash{$key}\n";
if ($evalue <= 1e-4 && $length >= 0.70 * $hash{$key}) {
print $line;
}
}
}
}
The output should be the first file (the file in the first code box) but with fewer lines: only the lines that pass the last if condition.
Thank you very, very much for your help!!!

if ($key == $con_name)
should be
if ($key eq $con_name)
as this should be string comparison.
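That numeric comparison also explains why each chosen line printed more than once: under ==, both strings numify to 0 (with a warning, if warnings are enabled), so every key in the hash compares equal to every $con_name, and the line is printed once per matching key. A quick one-liner makes the coercion visible:
perl -Mwarnings -le 'print "equal" if "conserved1" == "conserved2"'
# warns that neither argument is numeric, then prints "equal"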
And you don't really need the foreach loop just to pick one particular key:
while (my $line = <$al_info>) {
chomp($line);
my @data = split('\t', $line);
# my $con_name = $data[0];
# my $percent = $data[2];
# my $length = $data[3];
# my $evalue = $data[10];
my ($con_name, $percent, $length, $evalue) = @data[0,2,3,10];
# print $con_name. "\n";
if ($evalue <= 1e-4 && $length >= 0.70 * $hash{$con_name}) {
print "$line\n";
}
}
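One caveat with the direct lookup: if an alignment's name is missing from the size file, $hash{$con_name} is undef, which numifies to 0, so the length test would pass (with a warning) and the line would slip through. A guarded version of the condition, as a small sketch using the same variable names:
if ( exists $hash{$con_name}
     && $evalue <= 1e-4
     && $length >= 0.70 * $hash{$con_name} ) {
    print "$line\n";
}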

Related

truncate all lines in a file while preserving whole words

I'm trying to shorten each line of a file to 96 characters while preserving whole words. If a line is under or equal to 96 chars, I want to do nothing with that line. If it is over 96 chars, I want to cut it down to the closest length under 96 while preserving whole words. When I run this code, I get a blank file.
use Text::Autoformat;
use strict;
use warnings;
#open the file
my $filename = $ARGV[0]; # store the 1st argument into the variable
open my $file, '<', $filename;
open my $fileout, '>>', $filename.96;
my @file = <$file>; #each line of the file into an array
while (my $line = <$file>) {
chomp $line;
foreach (@file) {
#######
sub truncate($$) {
my ( $line, $max ) = @_;
# always do nothing if already short enough
( length( $line ) <= $max ) and return $line;
# forced to chop a word anyway
if ( $line =~ /\s/ ) {
return substr( $line, 0, $max );
}
# otherwise truncate on word boundary
$line =~ s/\S+$// and return $line;
die; # unreachable
}
#######
my $truncated = &truncate($line,96);
print $fileout "$truncated\n";
}
}
close($file);
close($fileout);
You have no output because you have no input.
1. my @file = <$file>; #each line of the file into an array
2. while (my $line = <$file>) { ...
The <$file> operation in line 1, being in list context, "consumes" all the input and loads it into @file. The <$file> operation in line 2 then has no more input left to read, so the while loop never executes.
You either want to stream from the filehandle
# don't call @file = <$file>
while (my $line = <$file>) {
chomp $line;
my $truncated = &truncate($line, 96);
...
}
Or read from the array of file contents
my @file = <$file>;
foreach my $line (@file) {
chomp $line;
my $truncated = &truncate($line, 96);
...
}
If the input is large, the former approach has the advantage of holding only a single line in memory at a time.
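Once the input problem is fixed, note that the truncate sub itself will still misbehave: the /\s/ test looks inverted (it hard-chops exactly when the line does contain whitespace), and the word-boundary branch deletes the last word from the full-length line rather than cutting at 96 first. A minimal corrected sketch, assuming the behaviour described in the question (renamed so it cannot collide with Perl's built-in truncate):
sub truncate_line {
    my ( $line, $max ) = @_;
    # already short enough: do nothing
    return $line if length($line) <= $max;
    # longest prefix of at most $max characters that ends at a word boundary
    return $1 if $line =~ /^(.{1,$max})\s/;
    # no usable boundary: forced to chop mid-word
    return substr( $line, 0, $max );
}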

How do I speed up pattern recognition in perl

This is the program as it stands right now. It takes in a .fasta file (a file containing genetic code), creates a hash table with the data and prints it; however, it is quite slow. It splits out a substring and compares it against all other substrings in the file.
use strict;
use warnings;
use Data::Dumper;
my $total = $#ARGV + 1;
my $row;
my $compare;
my %hash;
my $unique = 0;
open( my $f1, '<:encoding(UTF-8)', $ARGV[0] ) or die "Could not open file '$ARGV[0]' $!\n";
my $discard = <$f1>;
while ( $row = <$f1> ) {
chomp $row;
$compare .= $row;
}
my $size = length($compare);
close $f1;
for ( my $i = 0; $i < $size - 6; $i++ ) {
my $vs = ( substr( $compare, $i, 5 ) );
for ( my $j = 0; $j < $size - 6; $j++ ) {
foreach my $value ( substr( $compare, $j, 5 ) ) {
if ( $value eq $vs ) {
if ( exists $hash{$value} ) {
$hash{$value} += 1;
} else {
$hash{$value} = 1;
}
}
}
}
}
foreach my $val ( values %hash ) {
if ( $val == 1 ) {
$unique++;
}
}
my $OUTFILE;
open $OUTFILE, ">output.txt" or die "Error opening output.txt: $!\n";
print {$OUTFILE} "Number of unique keys: " . $unique . "\n";
print {$OUTFILE} Dumper( \%hash );
close $OUTFILE;
Thanks in advance for any help!
It is not clear from the description what is wanted from this script, but if you're looking for matching sets of 5 characters, you don't actually need to do any string matching: you can just run through the whole sequence and keep a tally of how many times each 5-letter sequence occurs.
use strict;
use warnings;
use Data::Dumper;
my $str; # store the sequence here
my %hash;
# slurp in the whole file
open(IN, '<:encoding(UTF-8)', $ARGV[0]) or die "Could not open file '$ARGV[0]' $!\n";
while (<IN>) {
chomp;
$str .= $_;
}
close(IN);
# not sure if you were deliberately omitting the last two letters of sequence
# this looks at all the sequence
my $l_size = length($str) - 4;
for (my $i = 0; $i < $l_size; $i++) {
$hash{ substr($str, $i, 5) }++;
}
# grep in a scalar context will count the values.
my $unique = grep { $_ == 1 } values %hash;
open OUT, ">output.txt" or die "Error opening output.txt: $!\n";
print OUT "Number of unique keys: ". $unique."\n";
print OUT Dumper(\%hash);
close OUT;
It might help to remove searching for information that you already have.
I don't see that $j depends upon $i so you're actually matching values to themselves.
So you're getting bad counts as well. It works for 1, because 1 is the square of 1.
But if for each five-character string you're counting strings that match, you're going
to get the square of the actual number.
You would actually get better results if you did it this way:
# compute it once.
my $lim = length( $compare ) - 6;
for ( my $i = 0; $i < $lim; $i++ ){
my $vs = substr( $compare, $i, 5 );
# count each unique identity *once*
# if it's in the table, we've already counted it.
next if $hash{ $vs };
$hash{ $vs }++; # we've found it, record it.
for ( my $j = $i + 1; $j < $lim; $j++ ) {
my $value = substr( $compare, $j, 5 );
$hash{ $value }++ if $value eq $vs;
}
}
However, it could be an improvement on this to use index for your second loop and let the C level of Perl do your matching for you:
my $pos = $i;
while ( $pos > -1 ) {
$pos = index( $compare, $vs, ++$pos );
$hash{ $vs }++ if $pos > -1;
}
Also, if you used index and wanted to omit the last two characters, as you do, it might make sense to remove them from the string you have to search:
substr( $compare, -2 ) = '';
But you could do all of this in one pass, as you loop through the file. I believe the code
below is almost an equivalent.
my $last_4 = '';
my $last_row = '';
my $discard = <$f1>;
# each row in the file after the first...
while ( $row = <$f1> ) {
chomp $row;
$last_row = $row;
$row = $last_4 . $row;
my $lim = length( $row ) - 4;
for ( my $i = 0; $i < $lim; $i++ ) {
$hash{ substr( $row, $i, 5 ) }++;
}
# four is the maximum we can copy over to the new row and not
# double count a strand of characters at the end.
$last_4 = substr( $row, -4 );
}
# I'm not sure what you're getting by omitting the last two characters of
# the last row, but this would replicate it
foreach my $bad_key ( map { substr( $last_row, $_, 5 ) } ( -5, -6 )) {
--$hash{ $bad_key };
delete $hash{ $bad_key } if $hash{ $bad_key } < 1;
}
# grep in a scalar context will count the values.
$unique = grep { $_ == 1 } values %hash;
You may be interested in this more concise version of your code that uses a global regex match to find all the subsequences of five characters. It also reads the entire input file in one go, and removes the newlines afterwards.
The path to the input file is expected as a parameter on the command line, and the output is sent to STDOUT, so it can be redirected to a file on the command line, like this
perl subseq5.pl input.txt > output.txt
I've also used Data::Dump instead of Data::Dumper because I believe it to be vastly superior. However it is not a core module, and so you will probably need to install it.
use strict;
use warnings;
use open qw/ :std :encoding(utf-8) /;
use Data::Dump;
my $str = do { local $/; <>; };
$str =~ tr/\n//d;
my %dups;
++$dups{$1} while $str =~ /(?=(.{5}))/g;
my $unique = grep $_ == 1, values %dups;
print "Number of unique keys: $unique\n";
dd \%dups;
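The (?=(.{5})) lookahead is what makes the matches overlap: it captures five characters without consuming them, so the regex engine advances the match position one character at a time. A quick comparison on a made-up six-letter string:
use strict;
use warnings;

my $seq = 'ABCDEF';
my @plain   = $seq =~ /(.{5})/g;      # ('ABCDE') - consumes 5 chars per match
my @overlap = $seq =~ /(?=(.{5}))/g;  # ('ABCDE', 'BCDEF')
print "@plain\n@overlap\n";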

how to write my results to external file in perl

I am trying to extract some particular columns from my data into an output file. I have succeeded in reading one column at a time, but I want to extract several columns of interest at once (I have the list of columns I want to extract in a separate text file), because extracting individual columns and then joining them into one file is becoming hectic. Here is the code I tried for extracting a single column:
#!/usr/bin/perl
use strict;
use warnings;
open (DATA, "<file.txt") or die ("Unable to open file");
my $search_string = "IADC512444";
my $header = <DATA>;
my @header_titles = split /\t/, $header;
my $extract_col = 0;
for my $header_line (@header_titles) {
last if $header_line =~ m/$search_string/;
$extract_col++;
}
print "Extracting column $extract_col\n";
while ( my $row = <DATA> ) {
last unless $row =~ /\S/;
chomp $row;
my @cells = split /\t/, $row;
print "$cells[$extract_col] ";
}
Is there any possibility to extract all the columns I want at a time, instead of only IADC512444, from my text file into an outfile on my hard disk? Please help me in solving this problem.
Thanks
If you need to print the contents to a file on disk then you should open a file in write mode and write to it. Also, if you want more columns, you can access the corresponding elements in the array @cells. In this example I am printing the column you are printing, plus columns 1 and 2:
open(OUT_FILE,">path_to_out_file") || die "cant open file...";
while ( my $row = <DATA> ) {
last unless $row =~ /\S/;
chomp $row;
my @cells = split /\t/, $row;
#print "$cells[$extract_col] ";
print OUT_FILE "$cells[$extract_col],$cells[1],$cells[2]\n";
}
close(OUT_FILE);
I have tweaked the code a little bit to suit your requirement.
In the variable req_hdr_string you should give the column names you require, separated by commas.
They will be split and stored in a hash.
Then from the header I get the position of each required column and print only those:
#!/usr/bin/perl
use strict;
use warnings;
open (DATA, "<h11.txt") or die ("Unable to open file");
my $req_hdr_string = "abc,ghi,mno,";
my %req_hdrs = ();
my %extract_col = ();
foreach(split /,/, $req_hdr_string)
{
print "req hdr is:$_\n";
$req_hdrs{$_} = $_;
}
my $index = 0;
my $header = <DATA>;
chomp $header;
foreach (split /\t/, $header)
{
print "input is:|$_|\n";
if(exists $req_hdrs{$_})
{
print "\treq index is:$index\n";
$extract_col{$index} = 1;
}
$index++;
}
open(OUT_FILE,">out_file") || die "cant open file...";
while ( my $row = <DATA> )
{
last unless $row =~ /\S/;
chomp $row;
my @cells = split /\t/, $row;
foreach $index (sort { $a <=> $b } keys %extract_col)
{
print OUT_FILE "$cells[$index],";
}
print OUT_FILE "\n";
}
close(OUT_FILE);
close(DATA);
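Since the list of wanted column names already lives in a separate text file, you could also build %req_hdrs from that file instead of hardcoding $req_hdr_string. A sketch, assuming one column name per line in a file called columns.txt (the filename is made up):
my %req_hdrs;
open my $col_fh, '<', 'columns.txt' or die "cant open columns.txt: $!";
while (my $name = <$col_fh>) {
    chomp $name;
    next unless $name =~ /\S/;   # skip blank lines
    $req_hdrs{$name} = 1;
}
close $col_fh;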

Manipulating files according to indexes by perl

I am working on some genome data and I have 2 files ->
File1
A1 1 10
A1 15 20
A2 2 11
A2 13 16
File2
>A1
CTATTATTTATCGCACCTACGTTCAATATTACAGGCGAACATACCTACTA
AAGTGTGTTAATTAATTAATGCTTGTAGGACATAATAATAACAATTGAAT
>A2
GTCTGCACAGCCGCTTTCCACACAGACATCATAACAAAAAATTTCCACCA
AACCCCCCCCTCCCCCCGCTTCTGGCCACAGCACTTAAACACATCTCTGC
CAAACCCCAAAAACAAAGAACCCTAACACCAGCCTAACCAGATTTCAAAT
In file 1, the 2nd and 3rd columns represent indexes into File2. What I want is: if the identifier in column 1 of file1 matches an identifier following the > symbol in file2, then, from the sequence line(s) after it in file2, give back the substring selected by the indexes in col2 and col3 of file1. (Sorry, I know it's complicated.) Here is the desired output ->
Output
>A1#1:10
CTATTATTTA
>A1#15:20
ACCTA
>A2#2:11
TCTGCACAGC
>A2#13:16
GCTT
I know that if I have only one string I can take out a substring very easily ->
@ARGV or die "No input file specified";
open $first, '<',$ARGV[0] or die "Unable to open input file: $!";
$string="GATCACAGGTCTATCACCCTATTAACCACTCACGGGAGCTCTCCATGCAT";
while (<$first>)
{
@cols = split /\s+/;
$co=$cols[1]-1;
$length=$cols[2]-$co;
$fragment = substr $string, $co, $length;
print ">",$cols[0],"#",$cols[1],":",$cols[2],"\n",$fragment,"\n";
}
But here my problem is: when should I read in my second file, how should I match the identifier in col1 (of file1) with the identifier in file2 (following the > symbol), and then how do I get the substring?
I wasn't sure if they were all one continuous line or separate lines.
I set it up as continuous for now.
Basically, read the 2nd file as master.
Then you can process as many index files as you need.
You can use hash of arrays to help with the indexing.
push @{$index{$key}}, [$start,$stop];
use strict;
my $master_file = "dna_master.txt";
if (!@ARGV) {
print "Usage: $0 [filename(s)]\n";
exit 1;
}
my %Data = read_master($master_file);
foreach my $index_file (@ARGV) {
my %Index = read_index($index_file);
foreach my $key (sort keys %Index) {
foreach my $i (@{$Index{$key}}) {
my ($start,$stop) = @$i;
print ">$key#$start:$stop\n";
my $pos = $start - 1;
my $count = $stop - $start + 1;
print substr($Data{$key},$pos,$count)."\n";
}
}
}
sub read_file {
my $file = shift;
my @lines;
open(FILE, $file) or die "Error: cannot open $file\n$!";
while(<FILE>){
chomp; #remove newline
s/(^\s+|\s+$)//g; # strip lead/trail whitespace
next if /^$/; # skip blanks
push @lines, $_;
}
close FILE;
return @lines;
}
sub read_index {
my $file = shift;
my @lines = read_file($file);
my %index;
foreach (@lines) {
my ($key,$start,$stop) = split /\s+/;
push @{$index{$key}}, [$start,$stop];
}
return %index;
}
sub read_master {
my $file = shift;
my %master;
my $key;
my @lines = read_file($file);
foreach (@lines) {
if ( m{^>(\w+)} ) { $key = $1 }
else { $master{$key} .= $_ }
}
return %master;
}
Load File2 into a hash, with A1, A2... as keys and the DNA sequence as the value. This way you can get the DNA sequence easily.
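A minimal sketch of that idea (the filename File2.fa is made up): keys are the names after >, values the concatenated sequence lines.
my %seq;
my $name;
open my $fh, '<', 'File2.fa' or die "Cannot open File2.fa: $!";
while (my $line = <$fh>) {
    chomp $line;
    if ($line =~ /^>(\S+)/) { $name = $1 }
    else                    { $seq{$name} .= $line }
}
close $fh;
# then, for an index row like "A1 15 20":
# print substr($seq{'A1'}, 15 - 1, 20 - 15 + 1), "\n";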
This 2nd update turns the master file into a hash of arrays as well.
It treats each row in the 2nd file as an individual sequence.
use strict;
my $master_file = "dna_master.txt";
if (!@ARGV) {
print "Usage: $0 [filename(s)]\n";
exit 1;
}
my %Data = read_master($master_file);
foreach my $index_file (@ARGV) {
my %Index = read_index($index_file);
foreach my $key (sort keys %Index) {
foreach my $i (@{$Index{$key}}) {
my ($start,$stop) = @$i;
print ">$key#$start:$stop\n";
my $pos = $start - 1;
my $count = $stop - $start + 1;
foreach my $seq (@{$Data{$key}}) {
print substr($seq,$pos,$count)."\n";
}
}
}
}
sub read_file {
my $file = shift;
my @lines;
open(FILE, $file) or die "Error: cannot open $file\n$!";
while(<FILE>){
chomp; #remove newline
s/(^\s+|\s+$)//g; # strip lead/trail whitespace
next if /^$/; # skip blanks
push @lines, $_;
}
close FILE;
return @lines;
}
sub read_index {
my $file = shift;
my @lines = read_file($file);
my %index;
foreach (@lines) {
my ($key,$start,$stop) = split /\s+/;
push @{$index{$key}}, [$start,$stop];
}
return %index;
}
sub read_master {
my $file = shift;
my %master;
my $key;
my @lines = read_file($file);
foreach (@lines) {
if ( m{^>(\w+)} ) { $key = $1 }
else { push @{ $master{$key} }, $_ }
}
return %master;
}
Output:
>A1#1:10
CTATTATTTA
AAGTGTGTTA
>A1#15:20
ACCTAC
ATTAAT
>A2#2:11
TCTGCACAGC
ACCCCCCCCT
AAACCCCAAA
>A2#13:16
GCTT
CCCC
ACAA

How to remove new line characters until each line has a specific number of instances of a specific character?

I have a real mess of a pipe-delimited file, which I need to load into a database. The file has 35 fields, and thus 34 pipes. One of the fields is comprised of HTML code which, for some records, includes multiple line breaks. Unfortunately there's no pattern as to where the line breaks fall.
The solution I've come up with is to count the number of pipes in each line and, until that number reaches 34, keep removing the newline characters. I'm not incredibly well-versed in Perl, but I think I'm close to achieving what I'm looking to do. Any suggestions?
#!/usr/local/bin/perl
use strict;
open (FILE, 'test.txt');
while (<FILE>) {
chomp;
my $line = $_;
#remove null characters that are included in file
$line =~ tr/\x00//;
#count number of pipes
my $count = ($line =~ tr/|//);
#each line should have 34 pipes
while ($count < 34) {
#remove new lines until line has 34 pipes
$line =~ tr/\r\n//;
$count = ($line =~ tr/|//);
print "$line\n";
}
}
This should work I guess.
#!/usr/bin/perl
use strict;
open (FILE, 'test.txt');
my ( $num_pipes, $line_num ) = ( 0, 0 );
my $tmp = "";
while (<FILE>)
{
$line_num++;
chomp;
my $line = $_;
$line =~ tr/\x00//d; #remove null characters that are included in file
$num_pipes += ($line =~ tr/|//); #count number of pipes
if ($num_pipes == 34 && length($tmp))
{
$tmp .= $line;
print "$tmp\n";
# Reset values.
$tmp = "";
$num_pipes = 0;
}
elsif ($num_pipes == 34 && length($tmp) == 0)
{
print "$line\n";
$num_pipes = 0;
}
elsif ($num_pipes < 34)
{
$tmp .= $line;
}
elsif ($num_pipes > 34)
{
print STDERR "Error before line $line_num. Too many pipes ($num_pipes)\n";
$num_pipes = 0;
$tmp = "";
}
}
Twiddle with $/, the input record separator?
while (!eof(FILE)) {
    # assemble a row of data: 35 pipe-separated fields, possibly over many lines
    my @fields = ();
    {
        # read 34 fields from FILE:
        local $/ = '|';
        for (1..34) {
            my $field = <FILE>;
            chomp $field; # strip the trailing '|' left by the record separator
            push @fields, $field;
        }
    } # $/ is set back to original value ("\n") at the end of this block
    push @fields, scalar <FILE>; # read last field, which ends with newline
    my $line = join '|', @fields;
    # ... now you can process $line, and you already have @fields ...
}
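One step is still left to the processing marked at the end: the embedded line breaks are now ordinary characters inside the assembled $line, so they must be deleted before the row is written out. A sketch of that last step:
$line =~ tr/\r\n//d;   # delete embedded line breaks from the assembled row
print "$line\n";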