how to combine the code so that the output is on the same line? - perl

Can you help me combine both of these programs to display the output in a row with two columns? The first column is for $1 and the second column is for $2.
Kindly help me to solve this. Thank you :)
This is my code 1.
#!/usr/local/bin/perl
#!/usr/bin/perl
use strict ;
use warnings ;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
my $input = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $output = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $input => $output
or die "gunzip failed: $GunzipError\n";
open (FILE, '<',"$output") or die "Cannot open $output\n";
while (<FILE>) {
my $line = $_;
chomp ($line);
if ($line=~ m/^\s+Timing Path Group \'(\S+)\'/) {
$line = $1;
print ("$1\n");
}
}
close (FILE);
This is my code 2.
my $input = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $output = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $input => $output
or die "gunzip failed: $GunzipError\n";
open (FILE, '<',"$output") or die "Cannot open $output\n";
while (<FILE>) {
my $line = $_;
chomp ($line);
if ($line=~ m/^\s+Levels of Logic:\s+(\S+)/) {
$line = $1;
print ("$1\n");
}
}
close (FILE);
This is my output for code 1, which contains 26 lines of data:
async_default
clock_gating_default
Ddia_link_clk
Ddib_link_clk
Ddic_link_clk
Ddid_link_clk
FEEDTHROUGH
INPUTS
Lclk
OUTPUTS
VISA_HIP_visa_tcss_2000
ckpll_npk_npkclk
clstr_fscan_scanclk_pulsegen
clstr_fscan_scanclk_pulsegen_notdiv
clstr_fscan_scanclk_wavegen
idvfreqA
idvfreqB
psf5_primclk
sb_nondet4tclk
sb_nondetl2tclk
sb_nondett2lclk
sbclk_nondet
sbclk_sa_det
stfclk_scan
tap4tclk
tapclk
The output of code 2 also has the same number of lines.

paste is useful for this. Assuming your shell is bash, you can use process substitutions:
paste <(perl script1.pl) <(perl script2.pl)
That emits columns separated by a tab character. For prettier output, you can pipe the output of paste to column:
paste <(perl script1.pl) <(perl script2.pl) | column -t -s $'\t'
And with this, you don't need to try to "merge" your Perl programs.
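For example, with the timing path groups from code 1 in the first column and made-up Levels of Logic values in the second (the numbers here are only illustrative, not real output), the column -t version would print something like:
async_default          3
clock_gating_default   1
Ddia_link_clk          12
...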

To combine the two scripts and to output two items of data on the same line, you need to hold on until the end of the file (or until you have both data items) and then output them at once. So you need to combine both loops into one:
#!/usr/bin/perl
use strict ;
use warnings ;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
my $input = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $output = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $input => $output
or die "gunzip failed: $GunzipError\n";
open (FILE, '<',"$output") or die "Cannot open $output\n";
my ($levels, $timing);
while (<FILE>) {
    my $line = $_;
    chomp ($line);
    if ($line =~ m/^\s+Levels of Logic:\s+(\S+)/) {
        $levels = $1;
    }
    if ($line =~ m/^\s+Timing Path Group \'(\S+)\'/) {
        $timing = $1;
    }
}
print "$levels, $timing\n";
close (FILE);

You still haven't given us one vital piece of information - what the input data looks like. Most importantly, are the two pieces of information you're looking for on the same line?
[Update: Looking more closely at your regexes, I see it's possible for both pieces of information to be on the same line - as they are both supposed to be the first item on the line. It would be helpful if you were clearer about that in your question.]
I think this will do the right thing, no matter what the answer to your question is. I've also added the improvements I suggested in my answer to your previous question:
#!/usr/bin/perl
use strict ;
use warnings ;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
my $zipped = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $unzipped = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $zipped => $unzipped
or die "gunzip failed: $GunzipError\n";
open (my $fh, '<', $unzipped) or die "Cannot open '$unzipped': $!\n";
my ($levels, $timing);
while (<$fh>) {
    chomp;
    if (m/^\s+Levels of Logic:\s+(\S+)/) {
        $levels = $1;
    }
    if (m/^\s+Timing Path Group \'(\S+)\'/) {
        $timing = $1;
    }
    # If we have both values, then print them out and
    # set the variables to 'undef' for the next iteration
    if ($levels and $timing) {
        print "$levels, $timing\n";
        undef $levels;
        undef $timing;
    }
}
close ($fh);
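If you would rather have the two values lined up as columns, with the value from code 1 first as described in the question, a small tweak (just a sketch; the 40-character column width is an arbitrary guess) is to replace the print with printf:
    # left-align the timing path group in a 40-character column,
    # then print the levels-of-logic value after it
    printf "%-40s %s\n", $timing, $levels;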

Related

Write If Statement Variable to New File

I am trying to send a variable that is defined in an if statement, $abc, to a new file. The code seems correct, but I know that it is not working because the file is not being created.
Data File Sample:
bos,control,x1,x2,29AUG2016,y1,y2,76.4
bos,control,x2,x3,30AUG2016,y2,y3,78.9
bos,control,x3,x4,01SEP2016,y3,y4,72.5
bos,control,x4,x5,02SEP2016,y4,y5,80.5
Perl Code:
#!/usr/bin/perl
use strict;
use warnings 'all';
use POSIX qw(strftime); #Pull in date
my $currdate = strftime( "%Y%m%d", localtime ); #Date in YYYYMMDD format
my $modded = strftime( "%d%b%Y", localtime ); #Date in DDMONYYYY format
my $newdate = uc $modded; #converts lowercase to uppercase
my $filename = '/home/.../.../text_file'; #Define full file path before opening
open(FILE, '<', $filename) or die "Uh, where's the file again?\n"; #Open file else give up and relay snarky error
while(<FILE>) #Open While Loop
{
chomp;
my @fields = split(',' , $_); #Identify columns
my $site = $fields[0];
my $var1 = $fields[1];
my $var2 = $fields[4];
my $var3 = $fields[7];
my $abc = print "$var1,$var2,$var3\n" if ($var1 =~ "control" && $var2 =~ "$newdate");
open my $abc, '>', '/home/.../.../newfile.txt';
close $abc;
}
close FILE;
In your code you have a few odd things that are likely mistakes.
my $abc = print "$var1,$var2,$var3\n" if ($var1 =~ "c01" && $var2 =~ "$newdate");
print will return success, which it does as 1. So you will print out the string to STDOUT, and then assign 1 to a new lexical variable $abc. $abc is now 1.
All of that only happens if that condition is met. Don't do conditional assignments. The behavior for this is undefined. So if the condition is false, your $abc might be undef. Or something else. Who knows?
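To make the pitfall concrete, here is a minimal sketch of the risky form next to a safe rewrite (variable names borrowed from the question):
# risky: "my" combined with a statement modifier - officially undefined behaviour
my $abc = "$var1,$var2,$var3\n" if $var1 =~ "control";
# safe: declare first, then assign conditionally
my $abc;
$abc = "$var1,$var2,$var3\n" if $var1 =~ "control";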
open my $abc, '>', '/home/.../.../newfile.txt';
close $abc;
You are opening a new filehandle called $abc. The my will redeclare it. That's a warning that you would get if you had use warnings in your code. It also overwrites your old $abc with a new file handle object.
You don't write anything to the file
... are weird foldernames, but that's probably just obfuscation for your example
I think what you actually want to do is this:
use strict;
use warnings 'all';
# ...
open my $fh, '<', $filename or die $!;
while ( my $line = <$fh> ) {
    chomp $line;
    my @fields = split( ',', $line );
    my $site = $fields[0];
    my $var1 = $fields[1];
    my $var2 = $fields[4];
    my $var3 = $fields[7];
    open my $fh_out, '>', '/home/.../.../newfile.txt';
    print $fh_out "$var1,$var2,$var3\n" if ( $var1 =~ "c01" && $var2 =~ "$newdate" );
    close $fh_out;
}
close $fh;
You don't need the $abc variable in between at all. You can just print to your new file handle $fh_out that's open for writing.
Note that you will overwrite the newfile.txt file every time you have a match in a line inside $filename.
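If the intent is to keep every matching line, a minimal sketch (my suggestion, not part of the answer above) is to open the output file once, before the loop, so the matches accumulate instead of replacing each other:
open my $fh_out, '>', '/home/.../.../newfile.txt' or die "Cannot open output file: $!";
while ( my $line = <$fh> ) {
    # ... split into fields and test as above ...
    print $fh_out "$var1,$var2,$var3\n" if $var1 =~ "c01" && $var2 =~ "$newdate";
}
close $fh_out;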
Your current code:
Prints the string
Assigns the result of printing it to a variable
Immediately overwrites that variable with a file handle (assuming open succeeded)
Closes that file handle without using it
Your logic should look more like this:
if ( $var1 =~ "c01" && $var2 =~ "$newdate" ) {
my $abc = "$var1,$var2,$var3\n"
open (my $file, '>', '/home/.../.../newfile.txt') || die("Could not open file: " . $!);
print $file $abc;
close $file;
}
You have a number of problems with your code. In addition to what others have mentioned
You create a new output file every time you find a matching input line. That will leave the file containing only the last printed string
Your test checks whether the text in the second column contains c01, but all of the lines in your sample input have control in the second column, so nothing will be printed
I'm guessing that you want to test for string equality, in which case you need eq instead of =~ which does a regular expression pattern match
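A quick illustration of the difference, using made-up values:
'29AUG2016' =~ '29AUG';     # true  - =~ does a pattern match, so a substring is enough
'29AUG2016' eq '29AUG';     # false - eq is true only when the two strings are identical
'29AUG2016' eq '29AUG2016'; # true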
I think it should look something more like this
use strict;
use warnings 'all';
use POSIX 'strftime';
my $currdate = uc strftime '%d%b%Y', localtime;
my ($input, $output) = qw/ data.txt newfile.txt /;
open my $fh, '<', $input or die qq{Unable to open "$input" for input: $!};
open my $out_fh, '>', $output or die qq{Unable to open "$output" for output: $!};
while ( <$fh> ) {
    chomp;
    my @fields = split /,/;
    my ($site, $var1, $var2, $var3) = @fields[0,1,4,7];
    next unless $var1 eq 'c01' and $var2 eq $currdate;
    print $out_fh "$var1,$var2,$var3\n";
}
close $out_fh or die $!;

How to use text output from one perl script as input to another perl script? I seem to be able to run this as two separate scripts but not as one

I've got a script that reformats an input file and creates an output file. When I try to read that output file for the second part of the script, it doesn't work. However, if I split the script into two parts it works fine and gives me the output that I need. I'm not a programmer and am surprised I've got this far - I've been banging my head for days trying to resolve this.
My command for running it is this (BTW the temp.txt was just a brute force workaround for getting rid of the final comma to get my final output file - couldn't find another solution):
c:\perl\bin\perl merge.pl F146.sel temp.txt F146H.txt
Input looks like this (from another software package) ("F146.sel"):
/ Selected holes from the .\Mag_F146_Trimmed.gdb database.
"L12260"
"L12270"
"L12280"
"L12290"
Output looks like this (mods to the text: quotes removed, insert comma, concatenate into one line, remove the last comma) "F146H.txt":
L12260,L12270,L12280,L12290
Then I want to use this as input in the next part of the script, which basically inserts this output into a line of code that I can use in another software package (my "merge.gs" file). This is the output that I get if I split my script into two parts, but it just gives me a blank if I do it as one (see below).
CURRENT Database,"RAD_F146.gdb"
SETINI MERGLINE.OUT="DALL"
SETINI MERGLINE.LINES="L12260,L12270,L12280,L12290"
GX mergline.gx
What follows is my "merge.pl". What have I done wrong?
(actually, the question could be - what haven't I done wrong, as this is probably the most retarded code you've seen in a while. In fact, I bet some of you could get this entire operation done in 10-15 lines of code, instead of my butchered 90. Thanks in advance.)
# this reformats the SEL file to remove the first line and replace the " with nothing
$file = shift ;
$temp = shift ;
$linesH = shift ;
#open (Profiles, ">.\\scripts\\P2.gs")||die "couldn't open output .gs file";
open my $in, '<', $file or die "Can't read old file: Inappropriate I/O control operation";
open my $out, '>', $temp or die "Can't write new file: Inappropriate I/O control operation";
my $firstLine = 1;
while( <$in> )
{
if($firstLine)
{
$firstLine = 0;
}
else{
s/"L/L/g; # replace "L with L
s/"/,/g; # replace " with,
s|\s+||; # concatenates it all into one line
print $out $_;
}
}
close $out;
open (part1, "${temp}")||die "Couldn't open selection file";
open (part2, ">${linesH}")||die "Couldn't open selection file";
printitChomp();
sub printitChomp
{
print part2 <<ENDGS;
ENDGS
}
while ($temp = <part1> )
{
print $temp;
printit();
}
sub printit
{$string = substr (${temp}, 0,-1);
print part2 <<ENDGS;
$string
ENDGS
}
####Theoretically this creates the merge script from the output
####file from the previous loop. However it only seems to work
####if I split this into 2 perl scripts.
open (MergeScript, ">MergeScript.gs")||die "couldn't open output .gs file";
printitMerge();
open (SEL, "${linesH}")||die "Couldn't open selection file";
sub printitMerge
#open .sel file
{
print MergeScript <<ENDGS;
ENDGS
}
#iterate over required files
while ( $line = <SEL> ){
chomp $line;
print STDOUT $line;
printitLines();
}
sub printitLines
{
print MergeScript <<ENDGS;
CURRENT Database,"RAD_F146.gdb"
SETINI MERGLINE.OUT="DALL"
SETINI MERGLINE.LINES="${line}"
GX mergline.gx
ENDGS
}
So I think all you were really missing was close(part2); to allow it to be reopened as SEL.
#!/usr/bin/env perl
use strict;
use warnings;
# this reformats the SEL file to remove the first line and replace the " with nothing
my $file = shift;
my $temp = shift;
my $linesH = shift;
open my $in, '<', $file or die "Can't read old file: Inappropriate I/O control operation";
open my $out, '>', $temp or die "Can't write new file: Inappropriate I/O control operation";
my $firstLine = 1;
while (my $line = <$in>){
    print "LINE: $line\n";
    if ($firstLine){
        $firstLine = 0;
    } else {
        $line =~ s/"L/L/g; # replace "L with L
        $line =~ s/"/,/g; # replace " with ,
        $line =~ s/\s+//g; # concatenates it all into one line
        print $out $line;
    }
}
close $out;
open (part1, $temp) || die "Couldn't open selection file";
open (part2, ">", $linesH) || die "Couldn't open selection file";
while (my $temp_line = <part1>){
    print "TEMPLINE: $temp_line\n";
    my $string = substr($temp_line, 0, -1);
    print part2 <<ENDGS;
$string
ENDGS
}
close(part2);
#### this creates the merge script from the output
#### file from the previous loop.
open (MergeScript, ">MergeScript.gs")||die "couldn't open output .gs file";
open (SEL, $linesH) || die "Couldn't open selection file";
#iterate over required files
while ( my $sel_line = <SEL> ){
    chomp $sel_line;
    print STDOUT $sel_line;
    print MergeScript <<"ENDGS";
CURRENT Database,"RAD_F146.gdb"
SETINI MERGLINE.OUT="DALL"
SETINI MERGLINE.LINES="$sel_line"
GX mergline.gx
ENDGS
}
And here is one alternative way of doing it:
#!/usr/bin/env perl
use strict;
use warnings;
my $file = shift;
open my $in, '<', $file or die "Can't read old file: Inappropriate I/O control operation";
my @lines = <$in>; # read in all the lines
shift @lines; # discard the first line
my $line = join(',', @lines); # join the lines with commas
$line =~ s/[\r\n"]+//g; # remove the quotes and newlines
# print the line into the mergescript
open (MergeScript, ">MergeScript.gs")||die "couldn't open output .gs file";
print MergeScript <<"ENDGS";
CURRENT Database,"RAD_F146.gdb"
SETINI MERGLINE.OUT="DALL"
SETINI MERGLINE.LINES="$line"
GX mergline.gx
ENDGS
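With this shorter version the temporary file and the extra filename arguments are no longer needed, so (assuming the same input file as before) it can be run as just
perl merge.pl F146.sel
and it writes MergeScript.gs directly.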

How to read from a file and direct output to a file if a file name is given on the command line, and print to the console if no argument is given

I made a file, "rootfile", that contains paths to certain files and the perl program mymd5.perl gets the md5sum for each file and prints it in a certain order. How do I redirect the output to a file if a name is given in the command line? For instance if I do
perl mymd5.perl md5file
then it will feed output to md5file. And if I just do
perl mydm5.perl
it will just print to the console.
This is my rootfile:
/usr/local/courses/cs3423/assign8/cmdscan.c
/usr/local/courses/cs3423/assign8/driver.c
/usr/local/courses/cs3423/assign1/xpostitplus-2.3-3.diff.gz
This is my program right now:
open($in, "rootfile") or die "Can't open rootfile: $!";
$flag = 0;
if ($ARGV[0]){
open($out,$ARGV[0]) or die "Can't open $ARGV[0]: $!";
$flag = 1;
}
if ($flag == 1) {
select $out;
}
while ($line = <$in>) {
$md5line = `md5sum $line`;
@md5arr = split(" ",$md5line);
if ($flag == 0) {
printf("%s\t%s\n",$md5arr[1],$md5arr[0]);
}
}
close($out);
If you don't give a FILEHANDLE to print or printf, the output will go to STDOUT (the console).
There are several way you can redirect the output of your print statements.
select $out; # everything you print after this line will go to the file specified by the filehandle $out.
... # your print statements come here.
close $out; # close the filehandle when done to avoid confusing the rest of the program.
# or you can use the filehandle right after the print statement, as in:
print $out "Hello World!\n";
You can print to a filename influenced by the value in @ARGV as follows:
This will take the name of the file in $ARGV[0] and use it to name a new file, edit.$ARGV[0]
#!/usr/bin/perl
use warnings;
use strict;
my $file = $ARGV[0];
open my $input, '<', $file or die $!;
my $editedfile = "edit.$file";
open my $name_change, '>', $editedfile or die $!;
if ($file eq "md5file"){
    while (my $line = <$input>){
        chomp $line;
        # Do something...
        print $name_change "$line\n";
    }
}
Perhaps the following will be helpful:
use strict;
use warnings;
while (<>) {
    my $md5line = `md5sum $_`;
    my @md5arr = split( " ", $md5line );
    printf( "%s\t%s\n", $md5arr[1], $md5arr[0] );
}
Usage: perl mydm5.pl rootfile [>md5file]
The last, optional parameter will direct output to the file md5file; if absent, the results are printed to the console.
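If you would rather have the script itself decide where to send the output (as the question asked) instead of relying on shell redirection, here is one possible sketch, keeping the hard-coded rootfile from the question:
#!/usr/bin/perl
use strict;
use warnings;
# "rootfile" is hard-coded, as in the question; $ARGV[0], if present,
# names the file to write to, otherwise output goes to the console
open my $in, '<', 'rootfile' or die "Can't open rootfile: $!";
my $out;
if (@ARGV) {
    open $out, '>', $ARGV[0] or die "Can't open $ARGV[0]: $!";
} else {
    $out = \*STDOUT;
}
while (my $line = <$in>) {
    chomp $line;
    my @md5arr = split " ", `md5sum $line`;
    printf {$out} "%s\t%s\n", $md5arr[1], $md5arr[0];
}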

How to search and replace using hash with Perl

I'm new to Perl and I'm afraid I am stuck and wanted to ask if someone might be able to help me.
I have a file with two columns (tab separated) of oldname and newname.
I would like to use the oldname as key and newname as value and store it as a hash.
Then I would like to open a different file (gff file) and replace all the oldnames in there with the newnames and write it to another file.
I have given it my best try but am getting a lot of errors.
If you could let me know what I am doing wrong, I would greatly appreciate it.
Here are how the two files look:
oldname newname(SFXXXX) file:
genemark-scaffold00013-abinit-gene-0.18 SF130001
augustus-scaffold00013-abinit-gene-1.24 SF130002
genemark-scaffold00013-abinit-gene-1.65 SF130003
file to search and replace in (an example of one of the lines):
scaffold00013 maker gene 258253 258759 . - . ID=maker-scaffold00013-augustus-gene-2.187;Name=maker-scaffold00013-augustus-gene-2.187;
Here is my attempt:
#!/usr/local/bin/perl
use warnings;
use strict;
my $hashfile = $ARGV[0];
my $gfffile = $ARGV[1];
my %names;
my $oldname;
my $newname;
if (!defined $hashfile) {
die "Usage: $0 hash_file gff_file\n";
}
if (!defined $gfffile) {
die "Usage: $0 hash_file gff_file\n";
}
###save hashfile with two columns, oldname and newname, into a hash with oldname as key and newname as value.
open(HFILE, $hashfile) or die "Cannot open $hashfile\n";
while (my $line = <HFILE>) {
chomp($line);
my ($oldname, $newname) = split /\t/;
$names{$oldname} = $newname;
}
close HFILE;
###open gff file and replace all oldnames with newnames from %names.
open(GFILE, $gfffile) or die "Cannot open $gfffile\n";
while (my $line2 = <GFILE>) {
chomp($line2);
eval "$line2 =~ s/$oldname/$names{oldname}/g";
open(OUT, ">SFrenamed.gff") or die "Cannot open SFrenamed.gff: $!";
print OUT "$line2\n";
close OUT;
}
close GFILE;
Thank you!
Your main problem is that you aren't splitting the $line variable. split /\t/ splits $_ by default, and you haven't put anything in there.
This program builds the hash, and then constructs a regex from all the keys by sorting them in descending order of length and joining them with the | regex alternation operator. The sorting is necessary so that the longest of all possible choices is selected if there are any alternatives.
Every occurrence of the regex is replaced by the corresponding new name in each line of the input file, and the output written to the new file.
use strict;
use warnings;
die "Usage: $0 hash_file gff_file\n" if #ARGV < 2;
my ($hashfile, $gfffile) = #ARGV;
open(my $hfile, '<', $hashfile) or die "Cannot open $hashfile: $!";
my %names;
while (my $line = <$hfile>) {
chomp($line);
my ($oldname, $newname) = split /\t/, $line;
$names{$oldname} = $newname;
}
close $hfile;
my $regex = join '|', sort { length $b <=> length $a } keys %names;
$regex = qr/$regex/;
open(my $gfile, '<', $gfffile) or die "Cannot open $gfffile: $!";
open(my $out, '>', 'SFrenamed.gff') or die "Cannot open SFrenamed.gff: $!";
while (my $line = <$gfile>) {
    chomp($line);
    $line =~ s/($regex)/$names{$1}/g;
    print $out $line, "\n";
}
close $out;
close $gfile;
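One caveat to add (my own note, not something the answer above covered): the old names contain characters such as . that are special in regular expressions, so it may be safer to escape each key with quotemeta when building the alternation:
my $regex = join '|',
    map { quotemeta }
    sort { length $b <=> length $a } keys %names;
$regex = qr/$regex/;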
Why are you using an eval? And $oldname is going to be undefined in the second while loop, because you redeclare it inside the first while loop's scope (even if you used the outer-scope variable, it would only hold the very last value that you processed, which wouldn't be helpful).
Take out the my $oldname and my $newname at the top of your script; they are useless.
Take out the entire eval line. You need to repeat the regex for each thing you want to replace. Try something like:
$line2 =~ s/$_/$names{$_}/g for keys %names;
Also see Borodin's answer. He made one big regex instead of a loop, and caught your lack of the second argument to split.

help merging perl code routines together for file processing

I need some Perl help in putting these two processes/pieces of code together. I was able to get them working individually to test, but I need help bringing them together, especially with the loop constructs. I'm not sure if I should go with foreach; anyway, the code is below.
Also, any best practices would be great too as I'm learning this language. Thanks for your help.
Here's the process flow I am looking for:
read a directory
look for a particular file
use the file name to strip out some key information to create a newly processed file
process the input file
create the newly processed file for each input file read (if I read in 10, I create 10 new files)
Part 1:
my $target_dir = "/backups/test/";
opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
while (defined(my $file = readdir($dh))) {
next if ($file =~ /^\.+$/);
#Get filename attributes
if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
print "$1\n";
print "$2\n";
print "$3\n";
}
print "$file\n";
}
Part 2:
use strict;
use Digest::MD5 qw(md5_hex);
#Create new file
open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file";
my $data = '';
my $line1 = <>;
chomp $line1;
my @heading = split /,/, $line1;
my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D");
while (<>)
{
my $digest = md5_hex($data);
chomp;
my (@values) = split /,/;
my $extra = "__mykey__$sep1$digest$sep2" ;
$extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(#values));
$data .= "$extra$eorec";
print NEWFILE "$data";
}
#print $data;
close (NEWFILE);
You are using an old style of Perl programming. I recommend you use functions and CPAN modules (http://search.cpan.org). Perl pseudocode:
use Modern::Perl;
# use...
sub get_input_files {
    # return an array of files (@)
}
sub extract_file_info {
    # takes the file name and returns an array of values (filename attrs)
}
sub process_file {
    # reads the input file, takes the previous attribs and builds the output file
}
my @ifiles = get_input_files;
foreach my $ifile (@ifiles) {
    my @attrs = extract_file_info($ifile);
    process_file($ifile, @attrs);
}
Hope it helps
I've bashed your two code fragments together (making the second a sub that the first calls for each matching file) and, if I understood your description of the objective correctly, this should do what you want. Comments on style and syntax are inline:
#!/usr/bin/env perl
# - Never forget these!
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
my $target_dir = "/backups/test/";
opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
while (defined(my $file = readdir($dh))) {
# Parens on postfix "if" are optional; I prefer to omit them
next if $file =~ /^\.+$/;
if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
process_file($file, $1, $2, $3);
}
print "$file\n";
}
sub process_file {
my ($orig_name, $foo_x, $name_x, $p_x) = @_;
my $new_name = "/backups/processed/foo$foo_x.name.$name_x-foo_p$p_x.out";
# - From your description of the task, it sounds like we actually want to
# read from the found file, not from <>, so opening it here to read
# - Better to use lexical ("my") filehandle and three-arg form of open
# - "or" has lower operator precedence than "||", so less chance of
# things being grouped in the wrong order (though either works here)
# - Including $! in the error will tell why the file open failed
open my $in_fh, '<', $orig_name or die "cannot read $orig_name: $!";
open(my $out_fh, '>', $new_name) or die "cannot create $new_name: $!";
my $data = '';
my $line1 = <$in_fh>;
chomp $line1;
my @heading = split /,/, $line1;
my ($sep1, $sep2, $eorec) = ("^A", "^E", "^D");
while (<$in_fh>) {
chomp;
my $digest = md5_hex($data);
my (@values) = split /,/;
my $extra = "__mykey__$sep1$digest$sep2";
$extra .= "$heading[$_]$sep1$values[$_]$sep2"
for (0 .. scalar(@values));
# - Useless use of double quotes removed on next two lines
$data .= $extra . $eorec;
#print $out_fh $data;
}
# - Moved print to output file to here (where it will print the complete
# output all at once) rather than within the loop (where it will print
# all previous lines each time a new line is read in) to prevent
# duplicate output records. This could also be achieved by printing
# $extra inside the loop. Printing $data at the end will be slightly
# faster, but requires more memory; printing $extra within the loop and
# getting rid of $data entirely would require less memory, so that may
# be the better option if you find yourself needing to read huge input
# files.
print $out_fh $data;
# - $in_fh and $out_fh will be closed automatically when it goes out of
# scope at the end of the block/sub, so there's no real point to
# explicitly closing it unless you're going to check whether the close
# succeeded or failed (which can happen in odd cases usually involving
# full or failing disks when writing; I'm not aware of any way that
# closing a file open for reading can fail, so that's just being left
# implicit)
close $out_fh or die "Failed to close file: $!";
}
Disclaimer: perl -c reports that this code is syntactically valid, but it is otherwise untested.