Read specific column in perl

I'm new to Perl. I have the text file below; from it I want a single Time column, with the columns after it holding the values. How can I create a text file with my desired output in Perl?
Time Value Time Value Time Value
1 0.353366497 1 0.822193251 1 0.780866396
2 0.168834182 2 0.865650713 2 0.42429447
3 0.323540698 3 0.865984245 3 0.856875894
4 0.721728497 4 0.634773162 4 0.563059042
5 0.545131335 5 0.029808531 5 0.645993399
6 0.143720835 6 0.949973296 6 0.14425803
7 0.414601876 7 0.53421424 7 0.826148814
8 0.194818367 8 0.942334356 8 0.837107013
9 0.291448263 9 0.242588271 9 0.939609775
10 0.500159997 10 0.428897293 10 0.41946448
I've tried the code below:
use strict;
use warnings;
use IO::File;
my $result;
my @files = (q[1.txt], q[2.txt], q[3.txt]);
my @fhs = ();
foreach my $file (@files) {
    my $fh = new IO::File $file, O_RDONLY;
    push @fhs, $fh if defined $fh;
}
while (1) {
    my @lines = map { $_->getline } @fhs;
    last if grep { not defined $_ } @lines[0..(@fhs-1)];
    my @result = join(qq[\t], map { s/[\r?\n]+/ /g; $_ } @lines) . qq[\r\n];
    open (MYFILE, '>>Result.txt');
    print (MYFILE "@result");
    close (MYFILE);
}

I'd go with split.
use warnings;
use strict;
open (my $f, '<', 'your-file.dat') or die;
while (my $line = <$f>) {
    my @elems = split ' ', $line;
    print join "\t", @elems[0,1,3,5];
    print "\n";
}

This is a one-liner; no need to write a script:
$ perl -lanE '$,="\t"; say @F[0,1,3,5]' 1.txt 2.txt 3.txt
If you like, you can shorten it to:
$ perl -lanE '$,="\t"; say @F[0,1,3,5]' [123].txt
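For reference, here is a rough long-form equivalent of that one-liner (a sketch: -a autosplits each line into @F, -l handles the newlines, and $, sets the output field separator):
use strict;
use warnings;
use feature 'say';

$, = "\t";               # output field separator
while (<>) {             # reads every file named on the command line
    chomp;               # what -l does on input
    my @F = split ' ';   # what -a does
    say @F[0, 1, 3, 5];  # the Time column plus the three Value columns
}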

Right now, you're just concatenating the lines of the files together. If that doesn't give you the output you want, you need to chop some columns out.
Since your output suggests the input files are tab-delimited, I split the incoming lines on tabs. And since you only wanted the second column, I take only the field at offset 1 from the split.
my $line_num = 0;
while (1) {
    my @lines = map { $_->getline } @fhs;
    last if grep { not defined $_ } @lines[0..$#fhs];
    $line_num++;
    my @rows = map { [ split /\t/ ] } @lines;
    my $time_val = $rows[0][0];
    die "Time values are not all equal on line #$line_num!"
        if grep { $time_val != $_->[0] } @rows
        ;
    my $result = join( qq[\t], $time_val, map { $_->[1] } @rows );
    open (MYFILE, '>>Result.txt');
    print (MYFILE "$result\n");
    close (MYFILE);
}
Of course, there is no reason to do custom coding to split delimited columns:
use Text::CSV;
...
my $csv = Text::CSV->new( { sep_char => "\t" } );
while (1) {
    my @rows = map { $csv->getline( $_ ) } @fhs;
    last if grep { not defined $_ } @rows[0..$#fhs];
    my ( $time_val, @time_vals ) = map { $_->[0] } @rows;
    my @values = map { $_->[1] } @rows;
    die "Time values are not all equal on line #$line_num!"
        if grep { $time_val != $_ } @time_vals
        ;
    my $result = join( qq[\t], $time_val, @values );
    ...
}

use strict;
use warnings;
open(FH,"<","a.txt");
print "=========== A File content =========== \n";
my $a = `cat a.txt`;
print "$a\n";
my #temp = <>;
my (#arr, #entries, #final);
foreach ( #temp ) {
#arr = split ( " ", $_ );
push #entries, #arr;
}
close FH;
my #entries1 = #entries;
for(my $i = 7; $i<=$#entries; $i=$i+2) {
push #final, $entries[$i];
}
my $size = scalar #final;
open FH1, ">", "b.txt";
print FH1 "Time \t Value\n";
for(my $i = 0; $i < $size; $i++) {
my $j = $i+1;
print FH1 "$j \t $final[$i]\n";
}
close FH1;
print "============ B file content ===============\n";
my $b = `cat b.txt`;
print "$b";
O/P:
=========== A File content ===========
Time Value Time Value Time Value
1 0.353366497 1 0.822193251 1 0.780866396
2 0.168834182 2 0.865650713 2 0.42429447
3 0.323540698 3 0.865984245 3 0.856875894
4 0.721728497 4 0.634773162 4 0.563059042
5 0.545131335 5 0.029808531 5 0.645993399
6 0.143720835 6 0.949973296 6 0.14425803
7 0.414601876 7 0.53421424 7 0.826148814
8 0.194818367 8 0.942334356 8 0.837107013
9 0.291448263 9 0.242588271 9 0.939609775
10 0.500159997 10 0.428897293 10 0.41946448
============ B file content ===============
Time Value
1 0.353366497
2 0.822193251
3 0.780866396
4 0.168834182
5 0.865650713
6 0.42429447
7 0.323540698
8 0.865984245
9 0.856875894
10 0.721728497
11 0.634773162
12 0.563059042
13 0.545131335
14 0.029808531
15 0.645993399
16 0.143720835
17 0.949973296
18 0.14425803
19 0.414601876
20 0.53421424
21 0.826148814
22 0.194818367
23 0.942334356
24 0.837107013
25 0.291448263
26 0.242588271
27 0.939609775
28 0.500159997
29 0.428897293
30 0.41946448


Merging N files based on their first column in perl

My question is similar to this question posted earlier.
I have many files that I need to merge based on the presence or absence of the ID in the first column, but while merging I get lots of empty values in my output file. I want those empty values to be zero when the ID is not present in another file. The example below is based on the content of only two files, but I have many sample files in this format (tab-separated).
For example:
File1
ID Value
123 1
231 2
323 3
541 7
File2
ID Value
541 6
123 1
312 3
211 4
Expected Output:
ID File1 File2
123 1 1
231 2 0
323 3 0
541 7 6
312 0 3
211 0 4
Obtaining Output:
ID File1 File2
123 1 1
231 2
323 3
541 7 6
312 undef 3
211 undef 4
As you can see above, I am getting output, but in the file2 column it's not adding zero, just leaving a blank, and the file1 column contains undef values. I have checked for undef values so that my final output prints zeros in place of undef, but I still get those empty spaces. Please find my code below (hardcoded for only two files).
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;
use Data::Dumper;
my $path = "/home/pranjay/Projects/test";
my #files = ("s1.txt","s2.txt");
my %classic_com;
my $cnt;
my $classic_txt;
my $sample_cnt = 0;
my $classic_txtcomb = "test_classic.txt";
open($classic_txt,">$path/$classic_txtcomb") or die "Couldn't open file
$classic_txtcomb for writing,$!";
print $classic_txt "#ID\t"."file1\tfile2\n";
foreach my $file(#files){
$sample_cnt++;
print "$sample_cnt\n";
open($cnt,"<$path/$file")or die "Couldn't open file $file for reading,$!";
while(<$cnt>){
chomp($_);
my #count = ();
next if($_=~/^ID/);
my #record=();
#record=split(/\t/,$_);
my $scnt = $sample_cnt -1;
if((exists($classic_com{$record[0]})) and ($sample_cnt > 0)){
${$classic_com{$record[0]}}[$scnt]=$record[1];
}else{
$count[$scnt] = "$record[1]";
$classic_com{$record[0]}= [#count];
}
}
}
my %final_txt=();
foreach my $key ( keys %classic_com ) {
#print "$key: ";
my #val = #{ $classic_com{$key} };
my #v;
foreach my $i ( #val ) {
if(not defined($i)){
$i = 0;
push(#v, $i);
}else{
push(#v, $i);
next;
}
}
$final_txt{$key} = [#v];
}
#print Dumper %classic_com;
while(my($key,$value)=each(%final_txt)){
my $val=join("\t", #{$value});
print $classic_txt "$key\t"."#{$value}"."\n";
}
Just read the input files into a hash of arrays. The topmost key is the ID; each inner array contains the value for file i at the i-th position. When printing, use the // defined-or operator to replace undefs with zeroes:
#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };
my %merged;
my $file_tally = 0;

while (my $file = shift) {
    open my $in, '<', $file or die "$file: $!";
    <$in>;  # skip the header
    while (<$in>) {
        my ($id, $value) = split;
        $merged{$id}[$file_tally] = $value;
    }
    ++$file_tally;
}

for my $id (keys %merged) {
    my @values = @{ $merged{$id} };
    say join "\t", $id, map $_ // 0, @values[0 .. $file_tally - 1];
}
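Assuming the script above is saved as merge.pl (a name I've picked for illustration), the file names come from the command line and the merged table goes to standard output:
perl merge.pl s1.txt s2.txt > merged.txt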
program.pl
my %val;
/ (\d+) \s+ (\d+) /x and $val{$1}{$ARGV} = $2 while <>;
pr( 'ID', my @f = sort keys %{{map %$_, values %val}} );
pr( $_, map $_//0, @{$val{$_}}{@f} ) for sort keys %val;
sub pr { print join("\t", @_) . "\n" }
Run:
perl program.pl s1.txt s2.txt
ID s1.txt s2.txt
123 1 1
211 0 4
231 2 0
312 0 3
323 3 0
541 7 6
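The golfed program.pl is dense, so here is a de-golfed sketch of the same logic for readability (the variable names are mine, not the original author's):
use strict;
use warnings;

my %val;                             # $val{$id}{$filename} = $value
while (<>) {
    next unless /(\d+)\s+(\d+)/;     # skip the "ID Value" header lines
    $val{$1}{$ARGV} = $2;            # $ARGV names the file currently being read
}

# Collect every file name that appears under any ID, sorted.
my %seen;
for my $per_id (values %val) {
    $seen{$_}++ for keys %$per_id;
}
my @files = sort keys %seen;

print join("\t", 'ID', @files), "\n";
for my $id (sort keys %val) {
    # The hash slice pulls values in file order; // 0 fills the gaps.
    print join("\t", $id, map { $_ // 0 } @{ $val{$id} }{@files}), "\n";
}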

Transposing an array of elements

I am trying to transpose an array.
I tried the following code...
#! /usr/bin/perl
use strict;
use warnings;
use autodie;
open my $fh, '<',"op.txt" || die "$!";
open my $wh , '>',"pwl.txt" || die "$!";
select ($wh);
while (my $line = <$fh>) {
my #rows = $line;
my #transposed;
for my $row (#rows) {
for my $column (0 .. $#{$row}) {
push(#{$transposed[$column]}, $row->[$column]);
}
}
for my $new_row (#transposed) {
for my $new_col (#{$new_row}) {
print $new_col, " ";
}
print "\n";
}
}
**********INPUT FILE******
1 2 3
4 5 6
7 8 9
********** EXPECTED OUTPUT FILE *******
1 4 7
2 5 8
3 6 9
******** GENERATED OUTPUT FILE *******
Currently I'm not able to print anything; the script shows the error:
"Can't use string ("1 4 7") as an ARRAY ref while "strict refs" in use"
Reference:
I used the following reference:
Transpose in perl
However, in that reference's example the input lines of the array are declared manually, whereas I am trying to process an array that comes from a text file.
Could anybody help me find where I made a mistake?
Many thanks
You did have to split the input line, as @dland suggested. But there were a few other issues.
Here's the corrected code [please pardon the gratuitous style cleanup]:
#! /usr/bin/perl
use strict;
use warnings;
use autodie;
open my $fh, '<',"op.txt" || die "$!";
open my $wh , '>',"pwl.txt" || die "$!";
my #rows;
while (my $line = <$fh>) {
my #line = split(" ",$line);
push(#rows,\#line);
}
close($fh);
my #transposed;
for my $row (#rows) {
push(#transposed,[]);
}
my $rowidx = -1;
for my $rowptr (#rows) {
++$rowidx;
my $colidx = -1;
for my $rowval (#$rowptr) {
++$colidx;
###printf("R=%d C=%d\n",$rowidx,$colidx);
my $colptr = $transposed[$colidx];
$colptr->[$rowidx] = $rowval;
}
}
for my $new_row (#transposed) {
for my $new_col (#$new_row) {
print $wh $new_col, " ";
}
print $wh "\n";
}
close($wh);
Note: It's slightly harder to transpose a non-square matrix. The above code may need to be extended a bit for that.
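For example, a sketch of one way to extend it, reusing @rows as read above: let autovivification size the output instead of pre-building empty column arrays, so rows of differing lengths simply leave undef holes:
my @transposed2;
for my $rowidx (0 .. $#rows) {
    my $rowptr = $rows[$rowidx];
    for my $colidx (0 .. $#$rowptr) {
        # Autovivifies $transposed2[$colidx] as needed, so ragged
        # input rows just leave undef gaps in the longer columns.
        $transposed2[$colidx][$rowidx] = $rowptr->[$colidx];
    }
}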
You're trying to shove a scalar into an array:
my @row = $line;
I think what you really want is to split on spaces:
my @row = split / /, $line;
Adding my @row = map [ split ], $line to my initial code helps print the data to pwl.txt.
However, it's not printing side by side; instead each value prints on a new line.
I guess this is why Mr. Borodin didn't include the matrix inside the while loop!
1
2
3
4
5
6
7
8
9
Here's a simpler way of doing this. It assumes all the rows are of the same length, and that all the lines in the file contain data -- i.e. there are no blank lines.
The name of the input file is expected as a parameter on the command line, and the output is sent to STDOUT so it can be redirected on the command line. For instance
perl transpose.pl op.txt > pwl.txt
use strict;
use warnings 'all';
my @matrix = map [ split ], <>;

print "@$_\n" for @matrix;
print "\n";

my @transpose;
for my $i ( 0 .. $#{ $matrix[0] } ) {
    $transpose[$i] = [ map { $_->[$i] } @matrix ];
}

print "@$_\n" for @transpose;
print "\n";
output
1 2 3
4 5 6
7 8 9
1 4 7
2 5 8
3 6 9

Perl: Iterating through large hash, runs out of memory

I have been trying to find values that match between two columns (column a and column b) of a large file and print the common values, plus the corresponding column d. I have been doing this by iterating through hashes; however, because the file is so large, there is not enough memory to produce the output file. Is there another way to do the same thing using fewer memory resources?
Any help is much appreciated.
The script I have written thus far is below:
#!/usr/bin/perl
use warnings;
use strict;
open (FILE1, "<input.txt") || die "$!\n Couldn't open input.txt\n";
open (Output, ">output.txt")||die "Can't Open output.txt ";
my $hash1={};
my $hash2={};
while (<FILE1>) {
    chomp (my $line = $_);
    my ($a, $b, $c, $d) = split (/\t/, $line);
    if ($a) {
        $hash1->{$a}{info1} = "$d"; #original_ID -> YOB
    }
    if ($b) {
        $hash2->{$b}{info2} = "$a"; #original_ID -> sire
    }
    foreach my $key (keys %$hash2) {
        if (exists $hash1->{$key}) {
            my $info1 = $hash1->{$key}->{info1};
            print "$key\t$info1\n";
        }
    }
}
close FILE1;
close Output;
print "Done\n";
To clarify, the input file is a large pedigree file. An example is:
1 2 3 1977
2 4 5 1944
3 4 5 1950
4 5 6 1930
5 7 6 1928
An example of the output file is:
2 1944
4 1950
5 1928
Does the below work for you?
#!/usr/local/bin/perl
use strict;
use warnings;
use DBM::Deep;
use List::MoreUtils qw(uniq);
my @seen;

my $db = DBM::Deep->new(
    file      => "foo.db",
    autoflush => 1
);

while (<>) {
    chomp;
    my @fields = split /\s+/;
    $$db{$fields[0]} = $fields[3];
    push @seen, $fields[1];
}

for (uniq @seen) {
    print $_ . " " . $$db{$_} . "\n" if exists $$db{$_};
}
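DBM::Deep keeps the hash on disk in foo.db rather than in RAM, which is what sidesteps the memory limit; the trade-off is slower lookups. Note that @seen itself still lives in memory, so for truly enormous inputs you might store it on disk as well. Assuming the script is saved as merge.pl (my name for it), run it as:
perl merge.pl input.txt > output.txt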

Parse tab-delimited data using perl

I have tab-delimited data that I want to process using Perl. I am a newbie to Perl and could not figure out how to solve this.
This is a sample of the table; my original file is almost a GB:
gi|306963568|gb|GL429799.1|_1316857_1453052 13 1
gi|306963568|gb|GL429799.1|_1316857_1453052 14 1
gi|306963568|gb|GL429799.1|_1316857_1453052 15 1
gi|306963568|gb|GL429799.1|_1316857_1453052 16 1
gi|306963568|gb|GL429799.1|_1316857_1453052 17 1
gi|306963568|gb|GL429799.1|_1316857_1453052 360 1
gi|306963568|gb|GL429799.1|_1316857_1453052 361 1
gi|306963568|gb|GL429799.1|_1316857_1453052 362 1
gi|306963568|gb|GL429799.1|_1316857_1453052 363 1
gi|306963568|gb|GL429799.1|_1316857_1453052 364 1
gi|306963568|gb|GL429799.1|_1316857_1453052 365 1
gi|306963568|gb|GL429799.1|_1316857_1453052 366 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38640 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38641 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38642 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38643 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38644 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38645 1
I would like to get the output as
Name, start value, end value, average
gi|306963568|gb|GL429799.1|_1316857_1453052 13 17 1
gi|306963568|gb|GL429799.1|_1316857_1453052 360 366 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38640 38645 1
It would be great if someone could share their wisdom.
The general pattern is
use strict;
use warnings;
open my $fh, '<', 'myfile' or die $!;
while (<$fh>) {
    chomp;
    my @fields = split /\t/;
    ...
}
Within the loop the fields can be accessed as $fields[0] through $fields[2].
Update
I have understood your question better, and I think this solution will work for you. Note that it assumes the input data is sorted, as you have shown in your question.
It accumulates the start and end values, the total and the count in hash %data, and keeps a list of all the names encountered in @names so that the data can be displayed in the order it was read.
The program expects the input file name as a parameter on the command line.
You need to consider the formatting of the average because it is a floating point value. As it stands it will display the value to sixteen significant figures, and you may want to curtail that using sprintf.
use strict;
use warnings;
my ($filename) = @ARGV;
open my $fh, '<', $filename or die qq{Unable to open "$filename": $!};

my @names;
my %data;

my $current_name = '';
my $last_index;

while (<$fh>) {
    chomp;
    my ($name, $index, $value) = split /\t/;
    if ( $current_name ne $name or $index > $last_index + 1 ) {
        push @names, $name unless $data{$name};
        push @{ $data{$name} }, {
            start => $index,
            count => 0,
            total => 0,
        };
        $current_name = $name;
    }
    my $entry = $data{$name}[-1];
    $entry->{end} = $index;
    $entry->{count} += 1;
    $entry->{total} += $value;
    $last_index = $index;
}

for my $name (@names) {
    for my $entry (@{ $data{$name} }) {
        my ($start, $end, $total, $count) = @{$entry}{qw/ start end total count /};
        print join("\t", $name, $start, $end, $total / $count), "\n";
    }
}
output
gi|306963568|gb|GL429799.1|_1316857_1453052 13 17 1
gi|306963568|gb|GL429799.1|_1316857_1453052 360 366 1
gi|306963580|gb|GL429787.1|_4276355_4500645 38640 38645 1
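As noted above, the average prints with full floating-point precision; a minimal tweak (my suggestion, not part of the original) to limit it to, say, three decimal places would be to change the print line to:
print join("\t", $name, $start, $end, sprintf('%.3f', $total / $count)), "\n";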
This will produce the same output for the sample in your question:
#!/usr/bin/env perl -n
#
my ($name, $i, $value) = split(/\t/);

sub print_stats {
    print join("\t", $prev_name, $start, $prev_i, $sum / ($prev_i - $start + 1)), "\n";
}

if ($prev_name eq $name && $i == $prev_i + 1) {
    $sum += $value;
    $prev_i = $i;
}
else {
    if ($prev_name) {
        &print_stats();
    }
    $start = $i;
    $prev_name = $name;
    $sum = $value;
    $prev_i = $i;
}

END {
    &print_stats();
}
Use it as:
./parser.pl < sample.txt
UPDATE: answers to the questions in comments:
To print output to a file, run like this: ./parser.pl < sample.txt > output.txt
$prev_name and $prev_i are NOT initialized, so they are undef at first (= NULL)
You could do something like this....
open (FILE, '<', 'data.txt') or die $!;
while (<FILE>) {
    chomp;
    my ($name, $start_value, $end_value, $average) = split("\t");
    print "Name: $name\n";
    print "Start Value: $start_value\n";
    print "End Value: $end_value\n";
    print "Average: $average\n";
    print "---------\n";
}
close (FILE);
exit;
Those look like GenBank identifiers... so I'm unsure where you're getting the start value, end value, and average.
Here's an example using Text::CSV:
use Text::CSV; # This will implicitly use Text::CSV_XS if it's installed
my $parser = Text::CSV->new( { sep_char => '|' } );
open my $fh, '<', 'myfile' or die $!;
while (my $row = $parser->getline($fh)) {
    # $row references an array of field values from the line just read
}
Also, as a minor side detail, your sample data is delimited by pipe characters, not tabs, although that may just be to avoid copy/paste errors for those answering your question. If the actual data is tab-delimited, set sep_char to "\t" instead of '|'.
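For instance, if you did want to pull one of those identifiers apart yourself, a plain split on the pipe character works too (a sketch using one of the sample values):
my @parts = split /\|/, 'gi|306963568|gb|GL429799.1|_1316857_1453052';
# @parts now holds ('gi', '306963568', 'gb', 'GL429799.1', '_1316857_1453052')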

How can I organize this data using Perl?

I am new to Perl. I have an input file such as:
a 7 5
b 8 2
a 3 2
b 4 1
c 6 1
How can I get output like
column_1_val, number_occurrence_column_1, sum_of_column_2, sum_of_column_3
For example
a 2 10 7
b 2 12 3
c 1 6 1
The program below is a possible solution. I have used the DATA file handle whereas you will presumably need to open an external file and use the handle from that.
use strict;
use warnings;
use feature 'say';
my %data;

while (<DATA>) {
    my ($key, @vals) = split;
    $data{$key}[0]++;    # element 0 counts how many times the key occurs
    my $i;
    # ++$i yields 1, 2, ... so each column's sum lands after the count
    $data{$key}[++$i] += $_ for @vals;
}

say join ' ', $_, @{$data{$_}} for sort keys %data;
__DATA__
a 7 5
b 8 2
a 3 2
b 4 1
c 6 1
output
a 2 10 7
b 2 12 3
c 1 6 1
That would be something like (untested):
while (<>) {
    if (m/(\w+)\s+(\d+)\s+(\d+)/) {
        ($n, $r1, $r2) = ($1, $2, $3);
        $nr{$n}++;
        $r1{$n} += $r1;
        $r2{$n} += $r2;
    }
}
for $n (sort keys %nr) {
    print "$n $nr{$n} $r1{$n} $r2{$n}\n";
}
This is a very quick-and-dirty way of doing what you described, but it should get you on your way.
I am not that familiar with Perl either, but in case you are only concerned with the result, below is a solution in awk. It may or may not help you, but in case you need it:
awk '{c[$1]++;a[$1]=a[$1]+$2;b[$1]+=$3}END{for(i in a)print i,c[i],a[i],b[i]}' file3
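Since the question is about Perl, here is a rough Perl one-liner equivalent of that awk command (a sketch; it assumes whitespace-separated columns as in the sample):
perl -lane '$c{$F[0]}++; $s2{$F[0]} += $F[1]; $s3{$F[0]} += $F[2]; END { print "$_ $c{$_} $s2{$_} $s3{$_}" for sort keys %c }' file3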
A slightly different take:
my %records;
while ( <> ) {
    my @cols = split ' ';
    my $rec = $records{ $cols[0] } ||= {};
    $rec->{number_occurrence_column_1}++;
    $rec->{sum_of_column_2} += $cols[1];
    $rec->{sum_of_column_3} += $cols[2];
}
foreach my $rec ( map { { col1 => $_, %{ $records{ $_ } } } }
                  sort keys %records
) {
    print join( "\t",
                @$rec{ qw<col1 number_occurrence_column_1
                          sum_of_column_2 sum_of_column_3
                       >
                }
              ), "\n"
              ;
}