perl to loop and check across files

I'm still having trouble with Perl programming and need a push to make this script work.
I have two files and I want to use the list file to "extract" rows from the data one. The problem is that the list file is formatted as follows:
X1 A B
X2 C D
X3 E F
And my data looks like this:
A X1 2 5
B X1 3 7
C X2 1 4
D X2 1 5
I need to take the element pairs from the list file and use them to select rows in the data file. At the same time I would like to write output like this:
X1 A B 2 5 3 7
X2 C D 1 4 1 5
I'm trying to write Perl code, but I haven't been able to produce anything useful. This is where I am:
open (LIST, "< $fils_list") || die "impossible to open the list";
@list = <LIST>;
close (LIST);
open (HAN, "< $data") || die "Impossible to open data";
@r = <HAN>;
close (HAN);
for ($p=0; $p<=$#list; $p++){
chomp ($list[$p]);
($x, $id1, $id2) = split (/\t/, $list[$p]);
$pair_one = $id1."\t".$x;
$pair_two = $id2."\t".$x;
for ($i=0; $i<=$#r; $i++){
chomp ($r[$i]);
($a, $b, $value1, $value2) = split (/\t/, $r[$i]);
$bench = $a."\t".$b;
if (($pair_one eq $bench) || ($pair_two eq $bench)){
print "I don't know what does this script must print!\n";
}
}
}
I can't work out what the script should print.
Any kind of suggestion is very welcome!

A few general recommendations:
Indent your code to show the structure of your program.
Use meaningful variable names, not $a or $value1 (if I do so below, this is due to my lack of domain knowledge).
Use data structures that suit your program.
Don't do operations like parsing a line more than once.
In Perl, every program should use strict; use warnings;.
use autodie for automatic error handling.
Also, use the open function like open my $fh, "<", $filename as this is safer.
Remember what I said about data structures? In the second file, you have entries like
A X1 2 5
This looks like a secondary key, a primary key, and some data columns. Key-value relationships are best expressed through a hash table.
use strict; use warnings; use autodie;
use feature 'say'; # available since 5.010
open my $data_fh, "<", $data;
my %data;
while (<$data_fh>) {
chomp; # remove newlines
my ($id2, $id1, @data) = split /\t/;
$data{$id1}{$id2} = \@data;
}
Now %data is a nested hash which we can use for easy lookups:
open my $list_fh, "<", $fils_list;
LINE: while(<$list_fh>) {
chomp;
my ($id1, @id2s) = split /\t/;
my $data_id1 = $data{$id1};
defined $data_id1 or next LINE; # maybe there isn't anything here. Then skip
my @values = map @{ $data_id1->{$_} }, @id2s; # map the 2nd level ids to their values and flatten the list
# now print everything out:
say join "\t", $id1, @id2s, @values;
}
The map function is a bit like a foreach loop, and builds a list of values. We need the @{ ... } here because the data structure doesn't hold arrays, but references to arrays. The @{ ... } is a dereference operator.
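For example, here is a tiny standalone sketch of map plus @{ ... } dereferencing (the variable names are made up purely for illustration):
my %scores = ( X1 => [2, 5], X2 => [1, 4] );  # hash of array references
my @ids = ('X1', 'X2');
my @flat = map @{ $scores{$_} }, @ids;        # dereference each array ref and flatten the results
print "@flat\n";                              # prints: 2 5 1 4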

This is how I would do it, mostly using hashes, or rather hash and array references (test1.txt and test2.txt contain the data you provided in your example):
use strict;
use warnings;
open(my $f1, '<','test1.txt') or die "Cannot open file1: $!\n";
open(my $f2, '<','test2.txt') or die "Cannot open file2: $!\n";
my @data1 = <$f1>;
my @data2 = <$f2>;
close($f1);
close($f2);
chomp @data1;
chomp @data2;
my %result;
foreach my $line1 (@data1) {
my @fields1 = split(' ',$line1);
$result{$fields1[0]}->{$fields1[1]} = [];
$result{$fields1[0]}->{$fields1[2]} = [];
}
foreach my $line2 (@data2){
my @fields2 = split(' ',$line2);
push @{$result{$fields2[1]}->{$fields2[0]}}, $fields2[2];
push @{$result{$fields2[1]}->{$fields2[0]}}, $fields2[3];
}
foreach my $res (sort keys %result){
foreach (sort keys %{$result{$res}}){
print $res . " " . $_ . " " . join (" ", sort @{$result{$res}->{$_}}) . "\n";
}
}


Perl: Read columns and convert to array

I am new to Perl and trying to read a file with columns and create an array.
I have a file with the following columns.
file.txt
A 15
A 20
A 33
B 20
B 45
C 32
C 78
I want to create an array for each unique item in the first column, with its values taken from the second column.
e.g.:
@A = (15,20,33)
@B = (20,45)
@C = (32,78)
I tried the following code, which only prints the two columns:
use strict;
use warnings;
my $filename = $ARGV[0];
open(FILE, $filename) or die "Could not open file '$filename' $!";
my %seen;
while (<FILE>)
{
chomp;
my $line = $_;
my @elements = split (" ", $line);
my $row_name = join "\t", @elements[0,1];
print $row_name . "\n" if ! $seen{$row_name}++;
}
close FILE;
Thanks
Firstly some general Perl advice. These days, we like to use lexical variables as filehandles and pass three arguments to open().
open(my $fh, '<', $filename) or die "Could not open file '$filename' $!";
And then...
while (<$fh>) { ... }
But, given that you have your filename in $ARGV[0], another tip is to use an empty file input operator (<>) which will return data from the files named in @ARGV without you having to open them. So you can remove your open() line completely and replace the while with:
while (<>) { ... }
Second piece of advice - don't store this data in individual arrays. Far better to store it in a more complex data structure. I'd suggest a hash where the key is the letter and the value is an array containing all of the numbers matching that letter. This is surprisingly easy to build:
use strict;
use warnings;
use feature 'say';
my %data; # I'd give this a better name if I knew what your data was
while (<>) {
chomp;
my ($letter, $number) = split; # splits $_ on whitespace by default
push @{ $data{$letter} }, $number;
}
# Walk the hash to see what we've got
for (sort keys %data) {
say "$_ : #{ $data{$_ } }";
}
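For your sample file, that loop should print something along these lines:
A : 15 20 33
B : 20 45
C : 32 78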
Change the loop to be something like:
while (my $line = <FILE>)
{
chomp($line);
my @elements = split (" ", $line);
push(@{$seen{$elements[0]}}, $elements[1]);
}
This will create/append a list of each item as it is found, and result in a hash where the keys are the left items, and the values are lists of the right items. You can then process or reassign the values as you wish.
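If you then want arrays like the @A in your example, you can pull them out of the hash afterwards. A small sketch, assuming the %seen hash built in the loop above:
my @A = @{ $seen{A} || [] };  # (15, 20, 33) for the sample data
my @B = @{ $seen{B} || [] };  # (20, 45)
print "@A\n@B\n";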

How to find two matched ID in two files, and then use their values to calculate

I have two files as following:
FILE#1
A 20.68
B 17.5
C 15.6
D 20.6
E 27.6
FILE#2
C 16.7
X 2.9
E 7.0
A 15.2
First column is the ID and second column is the score. I am trying to find matched IDs in both files, and then use the corresponding scores to calculate a final score (Score2 - Score1) for each matched ID. The following is the result I want:
OUTPUT
C 1.1
E -20.6
A -5.48
With the following code I can get the matched IDs, but I have no idea how to bring in the corresponding scores from FILE#1 so I can do the calculation. Your help will be greatly appreciated!
open my $A, 'list1.txt';
open my $B, 'list2.txt';
my %h;
map { chomp; $h{(split /\s+/)[0]} ++} <$A>;
while (<$B>) {
my #split = split(/\s+/,$_);
my $ID = $split[0];
my $score = $split[1];
print "$ID\t$score\n" if $h{$ID};
}
You just need to load your first file into a hash of key value pairs. Then when you iterate on the second file, you can test if each key exists in the previous file.
The following script opens file handles to strings to test the logic. But you can easily revert back to opening up the files for your live script.
use strict;
use warnings;
use autodie;
my %score1 = do {
#open my $fh1, '<', 'list1.txt';
open my $fh1, '<', \ "A 20.68\nB 17.5\nC 15.6\nD 20.6\nE 27.6\n";
map {chomp; split ' ', $_, 2} <$fh1>;
};
#open my $fh2, '<', 'list2.txt';
open my $fh2, '<', \ "C 16.7\nX 2.9\nE 7.0\nA 15.2";
while (<$fh2>) {
chomp;
my ($key, $score) = split ' ';
printf "%s %s\n", $key, $score - $score1{$key} if exists $score1{$key};
}
Outputs:
C 1.1
E -20.6
A -5.48

subroutine in perl to perform over #ARGV entries

I've got a small programme that basically processes lists of blast hits, and checks to see if there's overlap between the blast results by iterating blast results (as hash key) through hashes containing each blast list.
This involves processing each blast input file as $ARGV in the same way. Depending on what I'm trying to achieve, I might want to compare 2, 3 or 4 blast lists for gene overlap. I want to know how I can write the basic processing block as a subroutine that I can iterate over for however many $ARGV arguments exist.
For example, the below works fine if I input 2 blast lists:
#!/usr/bin/perl -w
use strict;
use File::Slurp;
use Data::Dumper;
$Data::Dumper::Sortkeys = 1;
if ($#ARGV != 1){
die "Usage: intersect.pl <de gene list 1><de gene list 2>\n"
}
my $input1 = $ARGV[0];
open my $blast1, '<', $input1 or die $!;
my $results1 = 0;
my (@blast1ID, @blast1_info, @split);
while (<$blast1>) {
chomp;
@split = split('\t');
push @blast1_info, $split[0];
push @blast1ID, $split[2];
$results1++;
}
print "$results1 blast hits in $input1\n";
my %blast1;
push @{$blast1{$blast1ID[$_]} }, [ $blast1_info[$_] ] for 0 .. $#blast1ID;
#print Dumper (\%blast1);
my $input2 = $ARGV[1];
open my $blast2, '<', $input2 or die $!;
my $results2 = 0;
my (@blast2ID, @blast2_info);
while (<$blast2>) {
chomp;
@split = split('\t');
push @blast2_info, $split[0];
push @blast2ID, $split[2];
$results2++;
}
my %blast2;
push @{$blast2{$blast2ID[$_]} }, [ $blast2_info[$_] ] for 0 .. $#blast2ID;
#print Dumper (\%blast2);
print "$results2 blast hits in $input2\n";
But I would like to be able to adjust it to cater for 3 or 4 blast lists inputs. I imagine a sub routine would work best for this, that is invoked for each input, and might look something like this:
sub process {
my $input$i = $ARGV[$i-1];
open my $blast$i, '<', $input[$i] or die $!;
my $results$i = 0;
my (@blast$iID, @blast$i_info, @split);
while (<$blast$i>) {
chomp;
@split = split('\t');
push @blast$i_info, $split[0];
push @blast$iID, $split[2];
$results$i++;
}
print "$results$i blast hits in $input$i\n";
print Dumper (\@blast$i_info);
print Dumper (\@blast$iID);
# Call sub 'process for every ARGV...
&process for 0 .. $#ARGV;
UPDATE:
I've removed the hash part for the last snippet.
The resultant data structure should be:
4 blast hits in <$input$i>
$VAR1 = [
'TCONS_00001332(XLOC_000827),_4.60257:9.53943,_Change:1.05146,_p:0.03605,_q:0.998852',
'TCONS_00001348(XLOC_000833),_0.569771:6.50403,_Change:3.51288,_p:0.0331,_q:0.998852',
'TCONS_00001355(XLOC_000837),_10.8634:24.3785,_Change:1.16613,_p:0.001,_q:0.998852',
'TCONS_00002204(XLOC_001374),_0.316322:5.32111,_Change:4.07226,_p:0.00485,_q:0.998852',
];
$VAR1 = [
'gi|50418055|gb|BC078036.1|_Xenopus_laevis_cDNA_clone_MGC:82763_IMAGE:5156829,_complete_cds',
'gi|283799550|emb|FN550108.1|_Xenopus_(Silurana)_tropicalis_mRNA_for_alpha-2,3-sialyltransferase_ST3Gal_V_(st3gal5_gene)',
'gi|147903202|ref|NM_001097651.1|_Xenopus_laevis_forkhead_box_I4,_gene_1_(foxi4.1),_mRNA',
'gi|2598062|emb|AJ001730.1|_Xenopus_laevis_mRNA_for_Xsox17-alpha_protein',
];
And the input:
TCONS_00001332(XLOC_000827),_4.60257:9.53943,_Change:1.05146,_p:0.03605,_q:0.998852 0.0 gi|50418055|gb|BC078036.1|_Xenopus_laevis_cDNA_clone_MGC:82763_IMAGE:5156829,_complete_cds
TCONS_00001348(XLOC_000833),_0.569771:6.50403,_Change:3.51288,_p:0.0331,_q:0.998852 0.0 gi|283799550|emb|FN550108.1|_Xenopus_(Silurana)_tropicalis_mRNA_for_alpha-2,3-sialyltransferase_ST3Gal_V_(st3gal5_gene)
TCONS_00001355(XLOC_000837),_10.8634:24.3785,_Change:1.16613,_p:0.001,_q:0.998852 0.0 gi|147903202|ref|NM_001097651.1|_Xenopus_laevis_forkhead_box_I4,_gene_1_(foxi4.1),_mRNA
TCONS_00002204(XLOC_001374),_0.316322:5.32111,_Change:4.07226,_p:0.00485,_q:0.998852 0.0 gi|2598062|emb|AJ001730.1|_Xenopus_laevis_mRNA_for_Xsox17-alpha_protein
You can't inject a variable value in the middle of a variable name. (Well, you can, but you shouldn't. And even then you can't use array indexing in the middle of the name.)
These names aren't valid:
@blast[$i]_info
@blast[$i]_ID
You need to move the index to the end:
@blast_info[$i]
@blast_ID[$i]
That said, I'd get rid of the arrays completely and use a hash instead.
Your second code snippet doesn't show a call to your subroutine. Unless it's explicitly called it will never run and your program will do nothing. I'd modify the process sub to take a single argument and call it for each element of #ARGV. e.g.
process($_) foreach @ARGV;
Here's how I'd write your program:
use strict;
use warnings;
use Data::Dumper;
my @blast;
push @blast, process($_) foreach @ARGV;
print Dumper(\@blast);
sub process {
my $file = shift;
open my $fh, '<', $file or die "Can't read file '$file' [$!]\n";
my %data;
while (<$fh>) {
chomp;
my ($id, undef, $info) = split '\t';
$data{$id} = $info;
}
return \%data;
}
It isn't quite clear what your resulting data structure should look like. (I took my best guess.) I recommend reading perlreftut to gain a better basic understanding of references and using them to build data structures in Perl.
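Since your stated goal is checking for gene overlap between the lists, here is a rough, purely illustrative sketch of how the hash references returned by process above could be intersected (assuming two input files):
my ($first, $second) = @blast[0, 1];  # hash refs returned by process()
my @shared = grep { exists $second->{$_} } keys %$first;
print scalar(@shared), " IDs appear in both lists\n";
print "$_\t$first->{$_}\n" for sort @shared;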

Multiple values in a hash for a single key

I have two files.
One contains a unique list of names, while the other is a redundant list of names with ages.
for example
File1:    File2:
Gaia      Gaia 3
Matt      Matt 12
Jane      Gaia 89
          Reuben 4
My aim is to match File1 and File2 and to retrieve the highest age for each name.
So far I have written the below code.
The bit that does not work quite well is: when the same key is found in the hash, print the bigger value.
Any suggestion/comment is welcome!
Thanks!!
#!/usr/bin/perl -w
use strict;
open (FILE1, $ARGV[0] )|| die "unable to open arg1\n"; #Opens first file for comparison
open (FILE2, $ARGV[1])|| die "unable to open arg2\n"; #2nd for comparison
my @not_red = <FILE1>;
my @exonslength = <FILE2>;
#2) Produce an Hash of File2. If the key is already in the hash, keep the couple key- value with the highest value. Otherwise, next.
my %hash_doc2;
my @split_exons;
my $key;
my $value;
foreach my $line (@exonslength) {
@split_exons = split "\t", $line;
@hash_doc2{$split_exons[0]} = ($split_exons[1]);
if (exists $hash_doc2{$split_exons[0]}) {
if ( $hash_doc2{$split_exons[0]} > values %hash_doc2) {
$hash_doc2{$split_exons[0]} = ($split_exons[1]);
} else {next;}
}
}
#3) grep the non redundant list of gene from the hash with the corresponding value
my @a = grep (@not_red,%hash_doc2);
print "@a\n";
Do you need to keep all the values? If not, you can only keep the max value:
@split_exons = split "\t", $line;
if (not exists $hash_doc2{$split_exons[0]}
or $hash_doc2{$split_exons[0]} < $split_exons[1]) {
$hash_doc2{$split_exons[0]} = $split_exons[1];
}
Your code does not keep all the values, either. You cannot store an array in a hash value; you have to store a reference. Adding a new value to an array can be done with push:
push @{ $hash_doc2{$split_exons[0]} }, $split_exons[1];
Your use of a numeric comparison against values is also not doing what you think. The comparison operator imposes scalar context, so values returns the number of values it holds. Another option would be to store the values sorted numerically and always ask for the highest value:
$hash_doc2{$split_exons[0]} = [ sort { $a <=> $b } @{ $hash_doc2{$split_exons[0]} }, $split_exons[1] ];
# max for $x is at $hash_doc2{$x}[-1]
Instead of reading in the whole of file2 into an array (which will be bad if it's big), you could loop through and process the data file line by line:
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
use Data::Dumper;
open( my $nameFh, '<', $ARGV[0]);
open( my $dataFh, '<', $ARGV[1]);
my $dataHash = {};
my $processedHash = {};
while(<$dataFh>){
chomp;
my ( $name, $age ) = split /\s+/, $_;
if(! defined($dataHash->{$name}) or $dataHash->{$name} < $age ){
$dataHash->{$name} = $age
}
}
while(<$nameFh>){
chomp;
$processedHash->{$_} = $dataHash->{$_} if defined $dataHash->{$_};
}
print Dumper($processedHash);

Perl merging 2 csv files line by line with a primary key

Edit: solution added.
Hi, I currently have some working albeit slow code.
It merges 2 CSV files line by line using a primary key.
For example, if file 1 has the line:
"one,two,,four,42"
and file 2 has this line;
"one,,three,,42"
where the 0-indexed $position = 4 holds the primary key, 42;
then the sub: merge_file($file1,$file2,$outputfile,$position);
will output a file with the line:
"one,two,three,four,42";
Every primary key is unique in each file, and a key might exist in one file but not in the other (and vice versa)
There are about 1 million lines in each file.
Going through every line in the first file, I am using a hash to store the primary key, and storing the line number as the value. The line number corresponds to an array[line num] which stores every line in the first file.
Then I go through every line in the second file, and check if the primary key is in the hash; if it is, I get the line from the file1array, add the columns I need from the first array to the second array, and concatenate the result onto the end of the output string. Then I delete the hash entry, and at the very end dump the entire thing to file. (I am using an SSD so I want to minimise file writes.)
It is probably best explained with a code:
sub merge_file2{
my ($file1,$file2,$out,$position) = ($_[0],$_[1],$_[2],$_[3]);
print "merging: \n$file1 and \n$file2, to: \n$out\n";
my $OUTSTRING = undef;
my %line_for;
my @file1array;
open FILE1, "<$file1";
print "$file1 opened\n";
while (<FILE1>){
chomp;
$line_for{read_csv_string($_,$position)}=$.; #reads csv line at current position (of key)
$file1array[$.] = $_; #store line in file1array.
}
close FILE1;
print "$file2 opened - merging..\n";
open FILE2, "<", $file2;
my @from1to2 = qw( 2 4 8 17 18 19); #which columns from file 1 to be added into cols. of file 2.
while (<FILE2>){
print "$.\n" if ($.%1000) == 0;
chomp;
my @array1 = ();
my @array2 = ();
my @array2 = split /,/, $_; #split 2nd csv line by commas
my @array1 = split /,/, $file1array[$line_for{$array2[$position]}];
# ^ ^ ^
# prev line lookup line in 1st file,lookup hash, pos of key
#my @output = &merge_string(\@array1,\@array2); #merge 2 csv strings (old fn.)
foreach(@from1to2){
$array2[$_] = $array1[$_];
}
my $outstring = join ",", @array2;
$OUTSTRING.=$outstring."\n";
delete $line_for{$array2[$position]};
}
close FILE2;
print "adding rest of lines\n";
foreach my $key (sort { $a <=> $b } keys %line_for){
$OUTSTRING.= $file1array[$line_for{$key}]."\n";
}
print "writing file $out\n\n\n";
write_line($out,$OUTSTRING);
}
The first while loop is fine and takes less than a minute; however, the second while loop takes about an hour to run, and I am wondering if I have taken the right approach. I think a lot of speedup should be possible? :) Thanks in advance.
Solution:
sub merge_file3{
my ($file1,$file2,$out,$position,$hsize) = ($_[0],$_[1],$_[2],$_[3],$_[4]);
print "merging: \n$file1 and \n$file2, to: \n$out\n";
my $OUTSTRING = undef;
my $header;
my (@file1,@file2);
open FILE1, "<$file1" or die;
while (<FILE1>){
if ($.==1){
$header = $_;
next;
}
print "$.\n" if ($.%100000) == 0;
chomp;
push @file1, [split ',', $_];
}
close FILE1;
open FILE2, "<$file2" or die;
while (<FILE2>){
next if $.==1;
print "$.\n" if ($.%100000) == 0;
chomp;
push @file2, [split ',', $_];
}
close FILE2;
print "sorting files\n";
my @sortedf1 = sort {$a->[$position] <=> $b->[$position]} @file1;
my @sortedf2 = sort {$a->[$position] <=> $b->[$position]} @file2;
print "sorted\n";
@file1 = undef;
@file2 = undef;
#foreach my $line (@file1){print "\t [ @$line ],\n"; }
my ($i,$j) = (0,0);
while ($i < $#sortedf1 and $j < $#sortedf2){
my $key1 = $sortedf1[$i][$position];
my $key2 = $sortedf2[$j][$position];
if ($key1 eq $key2){
foreach(0..$hsize){ #header size.
$sortedf2[$j][$_] = $sortedf1[$i][$_] if $sortedf1[$i][$_] ne undef;
}
$i++;
$j++;
}
elsif ( $key1 < $key2){
push(@sortedf2,[@{$sortedf1[$i]}]);
$i++;
}
elsif ( $key1 > $key2){
$j++;
}
}
#foreach my $line (@sortedf2){print "\t [ @$line ],\n"; }
print "outputting to file\n";
open OUT, ">$out";
print OUT $header;
foreach(@sortedf2){
print OUT (join ",", @{$_})."\n";
}
close OUT;
}
Thanks everyone, the solution is posted above. It now takes about 1 minute to merge the whole thing! :)
Two techniques come to mind.
Read the data from the CSV files into two tables in a DBMS (SQLite would work just fine), and then use the DB to do a join and write the data back out to CSV. The database will use indexes to optimize the join.
First, sort each file by primary key (using perl or unix sort), then do a linear scan over each file in parallel (read a record from each file; if the keys are equal then output a joined row and advance both files; if the keys are unequal then advance the file with the lesser key and try again). This step is O(n + m) time instead of O(n * m), and O(1) memory.
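For the first technique, a rough sketch of the DBI/DBD::SQLite route might look like the following (the table layout and five columns are assumptions based on your example line, and loading the CSV rows is only indicated by a comment):
use strict;
use warnings;
use DBI;
# In-memory SQLite database; use dbname=merge.db instead to keep it on disk.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "", { RaiseError => 1 });
$dbh->do("CREATE TABLE f1 (c0, c1, c2, c3, pk)");
$dbh->do("CREATE TABLE f2 (c0, c1, c2, c3, pk)");
$dbh->do("CREATE INDEX idx_f2_pk ON f2 (pk)");
# ... load each CSV file into its table with prepared INSERT statements,
#     inserting empty CSV fields as NULL so COALESCE below can fall through ...
my $sth = $dbh->prepare(
"SELECT COALESCE(f1.c0, f2.c0), COALESCE(f1.c1, f2.c1),
COALESCE(f1.c2, f2.c2), COALESCE(f1.c3, f2.c3), f1.pk
FROM f1 JOIN f2 ON f1.pk = f2.pk"
);
$sth->execute;
while (my @row = $sth->fetchrow_array) {
print join(",", map { defined $_ ? $_ : "" } @row), "\n";
}
This only shows the join step for keys present in both files; rows that exist in just one file would need an outer join or a follow-up query.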
What's killing the performance is this code, which is concatenating millions of times.
$OUTSTRING.=$outstring."\n";
....
foreach my $key (sort { $a <=> $b } keys %line_for){
$OUTSTRING.= $file1array[$line_for{$key}]."\n";
}
If you want to write to the output file only once, accumulate your results in an array, and then print them at the very end, using join. Or, even better perhaps, include the newlines in the results and write the array directly.
To see how concatenation does not scale when crunching big data, experiment with this demo script. When you run it in concat mode, things start slowing down considerably after a couple hundred thousand concatenations -- I gave up and killed the script. By contrast, simply printing an array of a million lines took less than a minute on my machine.
# Usage: perl demo.pl 50 999999 concat|join|direct
use strict;
use warnings;
my ($line_len, $n_lines, $method) = @ARGV;
my @data = map { '_' x $line_len . "\n" } 1 .. $n_lines;
open my $fh, '>', 'output.txt' or die $!;
if ($method eq 'concat'){ # Dog slow. Gets slower as @data gets big.
my $outstring;
for my $i (0 .. $#data){
print STDERR $i, "\n" if $i % 1000 == 0;
$outstring .= $data[$i];
}
print $fh $outstring;
}
elsif ($method eq 'join'){ # Fast
print $fh join('', @data);
}
else { # Fast
print $fh @data;
}
If you want a merge you should really merge. First of all you have to sort your data by key and then merge! You will beat even MySQL in performance. I have a lot of experience with it.
You can write something along those lines:
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;
use autodie;
use constant KEYPOS => 4;
die "Insufficient number of parameters" if #ARGV < 2;
my $csv = Text::CSV_XS->new( { eol => $/ } );
my $sortpos = KEYPOS + 1;
open my $file1, "sort -n -k$sortpos -t, $ARGV[0] |";
open my $file2, "sort -n -k$sortpos -t, $ARGV[1] |";
my $row1 = $csv->getline($file1);
my $row2 = $csv->getline($file2);
while ( $row1 and $row2 ) {
my $row;
if ( $row1->[KEYPOS] == $row2->[KEYPOS] ) { # merge rows
$row = [ map { $row1->[$_] || $row2->[$_] } 0 .. $#$row1 ];
$row1 = $csv->getline($file1);
$row2 = $csv->getline($file2);
}
elsif ( $row1->[KEYPOS] < $row2->[KEYPOS] ) {
$row = $row1;
$row1 = $csv->getline($file1);
}
else {
$row = $row2;
$row2 = $csv->getline($file2);
}
$csv->print( *STDOUT, $row );
}
# flush possible tail
while ( $row1 ) {
$csv->print( *STDOUT, $row1 );
$row1 = $csv->getline($file1);
}
while ( $row2 ) {
$csv->print( *STDOUT, $row2 );
$row2 = $csv->getline($file2);
}
close $file1;
close $file2;
Redirect output to file and measure.
If you like more sanity around sort arguments you can replace file opening part with
(open my $file1, '-|') || exec('sort', '-n', "-k$sortpos", '-t,', $ARGV[0]);
(open my $file2, '-|') || exec('sort', '-n', "-k$sortpos", '-t,', $ARGV[1]);
I can't see anything that strikes me as obviously slow, but I would make these changes:
First, I'd eliminate the #file1array variable. You don't need it; just store the line itself in the hash:
while (<FILE1>){
chomp;
$line_for{read_csv_string($_,$position)}=$_;
}
Secondly, although this shouldn't really make much of a difference with perl, I wouldn't add to $OUTSTRING all the time. Instead, keep an array of output lines and push onto it each time. If for some reason you still need to call write_line with a massive string you can always use join('', @OUTLINES) at the end.
If write_line doesn't use syswrite or something low-level like that, but rather uses print or other stdio-based calls, then you aren't saving any disk writes by building up the output file in memory. Therefore, you might as well not build your output up in memory at all, and instead just write it out as you create it. Of course if you are using syswrite, forget this.
Since nothing is obviously slow, try throwing Devel::SmallProf at your code. I've found that to be the best perl profiler for producing those "Oh! That's the slow line!" insights.
Assuming lines of around 20 bytes, each of your files would amount to about 20 MB, which isn't too big.
Since you are using hash your time complexity doesn't seem to be a problem.
In your second loop, you are printing progress to the console; this bit is slow. Removing it should help a lot.
You can also avoid the delete in the second loop.
Reading multiple lines at a time should also help. But not too much I think, there is always going to be a read ahead behind the scenes.
I'd store each record in a hash whose keys are the primary keys. A given primary key's value is a reference to an array of CSV values, where undef represents an unknown value.
use 5.10.0; # for // ("defined-or")
use Carp;
use Text::CSV;
sub merge_csv {
my($path,$record) = @_;
open my $fh, "<", $path or croak "$0: open $path: $!";
my $csv = Text::CSV->new;
local $_;
while (<$fh>) {
if ($csv->parse($_)) {
my @f = map length($_) ? $_ : undef, $csv->fields;
next unless @f >= 1;
my $primary = pop @f;
if ($record->{$primary}) {
$record->{$primary}[$_] //= $f[$_]
for 0 .. $#{ $record->{$primary} };
}
else {
$record->{$primary} = \@f;
}
}
else {
warn "$0: $path:$.: parse failed; skipping...\n";
next;
}
}
}
Your main program will resemble
my %rec;
merge_csv $_, \%rec for qw/ file1 file2 /;
The Data::Dumper module shows that the resulting hash given the simple inputs from your question is
$VAR1 = {
'42' => [
'one',
'two',
'three',
'four'
]
};
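To turn that hash back into merged CSV lines like "one,two,three,four,42", a short follow-up sketch (assuming the %rec built above, with the primary key written last as in your input) could be:
for my $key (sort keys %rec) {
print join(",", map { defined $_ ? $_ : "" } @{ $rec{$key} }, $key), "\n";
}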