I am working on a Perl script that successfully generates output files containing hashes. I want to use those hashes in my script. Is it possible to include a file that the script itself generates, or will I have to create another file?
Technically, it might be cleaner to start a new .pl file that uses those hashes, but I would like to keep everything in a single script if possible. Is it even possible to do so?
Edit: I'm just unsure if I can "circle" it back around so I can use those hashes in my script, because the hashes are generated on a weekly basis. I don't want my script to mistakenly reach for last week's hashes instead of the newly generated ones. I have not yet written my script in a manner that labels each week's generated hashes.
In summary, here is what my script does. It extracts a table from another file and removes the columns and rows that are not needed. Once left with the only two columns needed, it puts them into a hash, one column being the key and the other being the value. For this reason, I've found Data::Dumper to be the best option for my hashes. I'm intermediate in Perl, and this is a script I'm putting together for an internship.
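For reference, the column-to-hash step looks roughly like this (a minimal sketch; the file name, separator, and column order here are made up for illustration):
use strict;
use warnings;
use Data::Dumper;

# Hypothetical layout: two tab-separated columns, the first used
# as the key and the second as the value.
my %hash;
open my $fh, '<', 'extracted_table.txt' or die "Can't open extracted_table.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line;
    $hash{$key} = $value;
}
close $fh;
print Dumper(\%hash);   # this is the structure I write to the output file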
Here is an example of how you can save a hash as JSON to a file and later read the JSON back into a Perl hash. This example uses JSON::XS:
use strict;
use warnings;
use Data::Dumper;
use JSON::XS;
{
my %h = (a => 1, b => 2);
my $str = encode_json( \%h );
my $fn = 'test.json';
save_json( $fn, \%h );
my $h2 = read_json( $fn );
print Dumper( $h2 );
}
sub read_json {
my ( $fn ) = @_;
open ( my $fh, '<', $fn ) or die "Could not open file '$fn': $!";
my $str = do { local $/; <$fh> };
close $fh;
my $h = decode_json $str;
return $h;
}
sub save_json {
my ( $fn, $hash ) = @_;
my $str = encode_json( $hash );
open ( my $fh, '>', $fn ) or die "Could not open file '$fn': $!";
print $fh $str;
close $fh;
}
Output:
$VAR1 = {
'a' => 1,
'b' => 2
};
Some alternatives to JSON are YAML and Storable.
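For example, a minimal Storable round-trip looks like this (nstore writes in network byte order so the file is portable; the file name is arbitrary):
use strict;
use warnings;
use Storable qw(nstore retrieve);

my %h = (a => 1, b => 2);
nstore( \%h, 'test.stor' );       # serialize the hash to a binary file
my $h2 = retrieve('test.stor');   # read it back as a hash reference
print "$_ => $h2->{$_}\n" for sort keys %$h2;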
Related
I have the following list in a CSV file, and my goal is to split this list into directories named YYYY-Month based on the date in each row.
NAME99;2018/06/13;12:27:30
NAME01;2018/06/13;13:03:59
NAME00;2018/06/15;11:33:01
NAME98;2018/06/15;12:22:00
NAME34;2018/06/15;16:58:45
NAME17;2018/06/18;15:51:10
NAME72;2018/06/19;10:06:37
NAME70;2018/06/19;12:44:03
NAME77;2018/06/19;16:36:55
NAME25;2018/06/11;16:32:57
NAME24;2018/06/11;16:32:57
NAME23;2018/06/11;16:37:15
NAME01;2018/06/11;16:37:15
NAME02;2018/06/11;16:37:15
NAME01;2018/06/11;16:37:18
NAME02;2018/06/05;09:51:17
NAME00;2018/06/13;15:04:29
NAME07;2018/06/19;10:02:26
NAME08;2018/06/26;16:03:57
NAME09;2018/06/26;16:03:57
NAME02;2018/06/27;16:58:12
NAME03;2018/07/03;07:47:21
NAME21;2018/07/03;10:53:00
NAMEXX;2018/07/05;03:13:01
NAME21;2018/07/05;15:39:00
NAME01;2018/07/05;16:00:14
NAME00;2018/07/08;11:50:10
NAME07;2018/07/09;14:46:00
What is the smartest method to achieve this result without having to create a list of static paths in which to carry out the appends?
Currently my program writes this list to a directory called YYYY-Month based only on localtime, but it does not do anything with each line.
Perl
#!/usr/bin/perl
use strict;
use warnings 'all';
use feature qw(say);
use File::Path qw<mkpath>;
use File::Spec;
use File::Copy;
use POSIX qw<strftime>;
my $OUTPUT_FILE = 'output.csv';
my $OUTFILE = 'splitted_output.csv';
# Output to file
open( GL_INPUT, $OUTPUT_FILE ) or die $!;
$/ = "\n\n"; # input record separator
while ( <GL_INPUT> ) {
chomp;
my @lines = split /\n/;
my $i = 0;
foreach my $lines ( @lines ) {
# Encapsulate Date/Time
my ( $name, $y, $m, $d, $time ) =
$lines[$i] =~ /\A(\w+);(\d+)\/(\d+)\/(\d+);(\d+:\d+:\d+)/;
# Generate Directory YYYY-Month - #2009-January
my $dir = File::Spec->catfile( $BASE_LOG_DIRECTORY, "$y-$m" ) ;
unless ( -e $dir ) {
mkpath $dir;
}
my $log_file_path = File::Spec->catfile( $dir, $OUTFILE );
open( OUTPUT, '>>', $log_file_path ) or die $!;
# Here I append value into files
print OUTPUT join ';', "$y/$m/$d", $time, "$name\n";
$i++;
}
}
close( GL_INPUT );
close( OUTPUT );
There is no reason to care about the actual date, or to use date functions at all here. You want to split up your data based on a partial value of one of the columns in the data. That just happens to be the date.
NAME08;2018/06/26;16:03:57 # This goes to 2018-06/
NAME09;2018/06/26;16:03:57 #
NAME02;2018/06/27;16:58:12 #
NAME03;2018/07/03;07:47:21 # This goes to 2018-07/
NAME21;2018/07/03;10:53:00 #
NAMEXX;2018/07/05;03:13:01 #
NAME21;2018/07/05;15:39:00 #
The easiest way to do this is to iterate your input data, then stick it into a hash with keys for each year-month combination. But you're talking about log files, and they might be large, so that's inefficient.
We should work with different file handles instead.
use strict;
use warnings;
my %months = ( '06' => 'June', '07' => 'July' );
my %handles;
while (my $row = <DATA>) {
# no chomp, we don't actually care about reading the whole row
my (undef, $dir) = split /;/, $row; # discard name and everything after date
# create the YYYY-MM key
$dir =~ s[^(....)/(..)/..$][$1-$months{$2}];
# open a new handle for this year/month if we don't have it yet
unless (exists $handles{$dir}) {
# create the directory (skipped here) ...
open my $fh, '>', "$dir/filename.csv" or die $!;
$handles{$dir} = $fh;
}
# write out the line to the correct directory
print { $handles{$dir} } $row;
}
__DATA__
NAME08;2018/06/26;16:03:57
NAME09;2018/06/26;16:03:57
NAME02;2018/06/27;16:58:12
NAME03;2018/07/03;07:47:21
NAME21;2018/07/03;10:53:00
NAMEXX;2018/07/05;03:13:01
NAME21;2018/07/05;15:39:00
I've skipped the part about creating the directory as you already know how to do this.
This code will also work if your rows of data are not sequential. It's not the most efficient, as the number of handles will grow the more data you have, but as long as you don't have hundreds of them at the same time that does not really matter.
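If you ever did end up with far more months than you can keep handles for, one alternative (my own suggestion, not part of the answer above) is to drop the handle cache and open each file in append mode per row, trading speed for a constant number of open files:
use strict;
use warnings;

my %months = ( '06' => 'June', '07' => 'July' );

while (my $row = <DATA>) {
    my (undef, $dir) = split /;/, $row;
    $dir =~ s[^(....)/(..)/..$][$1-$months{$2}];
    # Open in append mode and close immediately: slower, but at most
    # one handle is open at a time (directory creation skipped, as above).
    open my $fh, '>>', "$dir/filename.csv" or die $!;
    print $fh $row;
    close $fh;
}

__DATA__
NAME08;2018/06/26;16:03:57
NAME03;2018/07/03;07:47:21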
Things of note:
You don't need chomp because you don't care about working with the last field.
You don't need to assign all of the values after split because you don't care about them.
You can discard values by assigning them to undef.
Always use three-argument open and lexical file handles.
the {} in print { ... } $row are needed to tell Perl that this is the handle we are printing to. See http://perldoc.perl.org/functions/print.html.
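A tiny illustration of that last point (my example, printing to STDOUT):
use strict;
use warnings;

my %handles = ( x => \*STDOUT );
my $row = "hello\n";
print { $handles{x} } $row;   # OK: the block yields the filehandle
# print $handles{x} $row;     # syntax error: a hash element can't sit
#                             # in the filehandle slot directly
my $fh = $handles{x};         # the alternative: copy to a plain scalar
print $fh $row;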
How can I make use of a record separator, and then simultaneously use a sub-record separator? Perhaps that isn't the best way to think about what I am trying to do. Here is my goal:
I want to perform a while loop on a single tab-delimited item at a time, in a specified row of items. For every line (row) of tab-separated items, I need to print the outcomes of all the while loops into a unique file. Allow the following examples to clarify.
My input file will be something like the following. It will be called "Clustered_Barcodes.txt"
TTTATGC TTTATGG TTTATCC TTTATCG
TTTATAA TTTATAA TTTATAT TTTATAT TTTATTA
CTTGTAA
My perl code looks like the following:
#!/usr/bin/perl
use warnings;
use strict;
open(INFILE, "<", "Clustered_Barcodes.txt") or die $!;
my %hash = (
"TTTATGC" => "TATAGCGCTTTATGCTAGCTAGC",
"TTTATGG" => "TAGCTAGCTTTATGGGCTAGCTA",
"TTTATCC" => "GCTAGCTATTTATCCGCTAGCTA",
"TTTATCG" => "TAGCTAGCTTTATCGCGTACGTA",
"TTTATAA" => "TAGCTAGCTTTATAATAGCTAGC",
"TTTATAA" => "ATCGATCGTTTATAACGATCGAT",
"TTTATAT" => "TCGATCGATTTATATTAGCTAGC",
"TTTATAT" => "TAGCTAGCTTTATATGCTAGCTA",
"TTTATTA" => "GCTAGCTATTTATTATAGCTAGC",
"CTTGTAA" => "ATCGATCGCTTGTAACGATTAGC",
);
while(<INFILE>) {
$/ = "\n";
my @lines = <INFILE>;
open my $out, '>', "Clustered_Barcode_$..fasta" or die $!;
foreach my $sequence (@lines){
if (exists $hash{$sequence}){
print $out ">$sequence\n$hash{$sequence}\n";
}
}
}
My desired output would be three different files.
The first file will be called "Clustered_Barcode_1.fasta" and will look like:
>TTTATGC
TATAGCGCTTTATGCTAGCTAGC
>TTTATGG
TAGCTAGCTTTATGGGCTAGCTA
>TTTATCC
GCTAGCTATTTATCCGCTAGCTA
>TTTATCG
TAGCTAGCTTTATCGCGTACGTA
Note that this is formatted so that each key is preceded by a '>' character, and on the next line is the longer associated sequence (value). This file includes all the sequences in the first line of Clustered_Barcodes.txt.
My second file, "Clustered_Barcode_2.fasta", would hold the second line's sequences in the same format. My third file should be named "Clustered_Barcode_3.fasta" and look like the following:
>CTTGTAA
ATCGATCGCTTGTAACGATTAGC
When I run my code, it only takes the second and third lines of sequences in the input file. How can I start with the first line (by getting rid of the \n requirement for a record separator)? How can I then process each item at a time and print each line's worth of results into one file? Also, if there is a way to incorporate the number of sequences into the file name, that would be great. It would help me to later organize the files by size. For example, the name could be something like "Clustered_Barcodes_1_File_3_Sequences.fasta".
Thank you all.
OK, so here's one way to do it:
#!/usr/bin/perl
use strict;
use warnings;
Standard preamble.
my %hash = (
"TTTATGC" => "TATAGCGCTTTATGCTAGCTAGC",
"TTTATGG" => "TAGCTAGCTTTATGGGCTAGCTA",
"TTTATCC" => "GCTAGCTATTTATCCGCTAGCTA",
"TTTATCG" => "TAGCTAGCTTTATCGCGTACGTA",
"TTTATAA" => "TAGCTAGCTTTATAATAGCTAGC",
"TTTATAA" => "ATCGATCGTTTATAACGATCGAT",
"TTTATAT" => "TCGATCGATTTATATTAGCTAGC",
"TTTATAT" => "TAGCTAGCTTTATATGCTAGCTA",
"TTTATTA" => "GCTAGCTATTTATTATAGCTAGC",
"CTTGTAA" => "ATCGATCGCTTGTAACGATTAGC",
);
Set up the hash of sequences.
my $infile = 'Clustered_Barcodes.txt';
open my $infh, '<', $infile or die "$0: $infile: $!\n";
Open file for reading.
chomp(my @rows = readline $infh);
my $row_count = @rows;
Slurp all lines into memory in order to get the number of sequences. If you have too many sequences, this approach is not going to work, because you may run out of memory (depending on how much RAM the machine has).
my $i = 1;
for my $row (@rows) {
Loop over the lines.
my @fields = split /\t/, $row;
Split each line into fields separated by tabs.
my $outfile = "Clustered_Barcodes_${i}_File_${row_count}_Sequences.fasta";
$i++;
open my $outfh, '>', $outfile or die "$0: $outfile: $!\n";
Open current output file and increment counter.
for my $field (#fields) {
print $outfh ">$field\n$hash{$field}\n" if exists $hash{$field};
}
Write each field (and its mapping) to outfile.
}
And we're done. The main difference from your original code is using split /\t/ and a foreach loop over the fields within each line.
We can do it without slurping, too:
while (my $row = readline $infh) {
chomp $row;
Loop over the lines, one by one. This replaces the 4 lines from chomp(my @rows = readline $infh); to for my $row (@rows) {.
But now we've lost the $i and $row_count variables, so we have to change the initialization of $outfile:
my $outfile = "Clustered_Barcodes_$..fasta";
That should be all the changes you need. (You can get $row_count back in this scenario by reading $infh twice (the first time just for counting, then seeking back to the start); this is left as an exercise for the reader.)
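For what it's worth, a minimal sketch of that exercise (my addition; it assumes $infh is a regular, seekable file rather than a pipe):
my $row_count = 0;
$row_count++ while readline $infh;          # first pass: just count the rows
seek $infh, 0, 0 or die "$0: seek: $!\n";   # rewind to the start
$. = 0;                                     # reset the line counter for the re-read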
There's no need to read in the whole file, as far as I can see. You just need to loop over the contents of each line:
while(my $line = <INFILE>) {
chomp $line;
open my $out, '>', "Clustered_Barcode_$..fasta" or die $!;
foreach my $sequence ( split /\t/, $line ){
if (exists $hash{$sequence}){
print $out ">$sequence\n$hash{$sequence}\n";
}
}
}
I have two CSV files. The first is a list file; it contains the IDs and names. For example
1127100,Acanthocolla cruciata
1127103,Acanthocyrta haeckeli
1127108,Acanthometra fusca
The second file is the one I want to extract lines from, exchanging the leading number for its name when a match is found. The first column of numbers corresponds between the two files. For example
1127108,1,0.60042
1127103,1,0.819671
1127100,2,0.50421,0.527007
10207,3,0.530422,0.624466
So I want to end up with a CSV file like this
Acanthometra fusca,1,0.60042
Acanthocyrta haeckeli,1,0.819671
Acanthocolla cruciata,2,0.50421,0.527007
I tried Perl, but opening two files at once proved messy. So I tried converting one of the CSV files to a string and parsing it that way, but that didn't work. Then I was reading about grep and other one-liners, but I am not familiar with them. Would this be possible with grep?
This is the Perl code I tried
use strict;
use warnings;
open my $csv_score, '<', "$ARGV[0]" or die qq{Failed to open "$ARGV[0]" for input: $!\n};
open my $csv_list, '<', "$ARGV[1]" or die qq{Failed to open "$ARGV[1]" for input: $!\n};
open my $out, ">$ARGV[0]_final.txt" or die qq{Failed to open for output: $!\n};
my $string = <$csv_score>;
while ( <$csv_list> ) {
my ($find, $replace) = split /,/;
$string =~ s/$find/$replace/g;
if ($string =~ m/^$replace/){
print $out $string;
}
}
close $csv_score;
close $csv_list;
close $out;
The general-purpose text processing tool that comes with all UNIX installations is named awk:
$ awk -F, -v OFS=, 'NR==FNR{m[$1]=$2;next} $1=m[$1]' file1 file2
Acanthometra fusca,1,0.60042
Acanthocyrta haeckeli,1,0.819671
Acanthocolla cruciata,2,0.50421,0.527007
Here NR==FNR is true only while the first file is being read, so m[$1]=$2 builds the ID-to-name map from it. For each line of the second file, $1=m[$1] replaces the ID with its name; since a pattern consisting of an assignment is true only when the assigned value is non-empty, lines with no matching ID (such as the 10207 row) are silently dropped.
Your code was failing because you read only the first line from the $csv_score file, and because you tried to print $string every time it changed. You also failed to remove the newline from the end of the lines of your $csv_list file. If you fix those things then it looks like this
use strict;
use warnings;
open my $csv_score, '<', "$ARGV[0]" or die qq{Failed to open "$ARGV[0]" for input: $!\n};
open my $csv_list, '<', "$ARGV[1]" or die qq{Failed to open "$ARGV[1]" for input: $!\n};
open my $out, ">$ARGV[0]_final.txt" or die qq{Failed to open for output: $!\n};
my $string = do {
local $/;
<$csv_score>;
};
while ( <$csv_list> ) {
chomp;
my ( $find, $replace ) = split /,/;
$string =~ s/$find/$replace/g;
}
print $out $string;
close $csv_score;
close $csv_list;
close $out;
output
Acanthometra fusca,1,0.60042
Acanthocyrta haeckeli,1,0.819671
Acanthocolla cruciata,2,0.50421,0.527007
10207,3,0.530422,0.624466
However, that's not a safe way of doing things, as IDs may be found elsewhere than at the start of a line.
I would build a hash out of the $csv_list file like this, which also makes the program more concise
use strict;
use warnings;
use v5.10.1;
use autodie;
my %ids;
{
open my $fh, '<', $ARGV[1];
while ( <$fh> ) {
chomp;
my ($id, $name) = split /,/;
$ids{$id} = $name;
}
}
open my $in_fh, '<', $ARGV[0];
open my $out_fh, '>', "$ARGV[0]_final.txt";
while ( <$in_fh> ) {
s{^(\d+)}{$ids{$1} // $1}e;
print $out_fh $_;
}
The output is identical to that of the first program above
The problem with the code as written is that you only do this once:
my $string = <$csv_score>;
This reads one line from $csv_score and you don't ever use the rest.
I would suggest that you need to:
Read the first file into a hash
Iterate the second file, and do a replace on the first column.
Using Text::CSV is generally a good idea for processing CSV data, but it doesn't seem to be necessary for your example.
So:
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV;
use Data::Dumper;
my $csv = Text::CSV->new( { binary => 1 } );
my %replace;
while ( my $row = $csv->getline( \*DATA ) ) {
last if $row->[0] =~ m/NEXT/;
$replace{ $row->[0] } = $row->[1];
}
print Dumper \%replace;
my $search = join( "|", map {quotemeta} keys %replace );
$search = qr/$search/;
while ( my $row = $csv->getline( \*DATA ) ) {
$row->[0] =~ s/^($search)$/$replace{$1}/;
$csv->print( \*STDOUT, $row );
print "\n";
}
__DATA__
1127100,Acanthocolla cruciata
1127103,Acanthocyrta haeckeli
1127108,Acanthometra fusca
NEXT
1127108,1,0.60042
1127103,1,0.819671
1127100,2,0.50421,0.527007
10207,3,0.530422,0.624466
Note - this still prints that last line of your source content:
"Acanthometra fusca ",1,"0.60042 "
"Acanthocyrta haeckeli ",1,"0.819671 "
"Acanthocolla cruciata ",2,0.50421,"0.527007 "
10207,3,0.530422,0.624466
(Your data contained whitespace, so Text::CSV wraps it in quotes)
If you want to discard that, then you could test if the replace actually occurred:
if ( $row->[0] =~ s/^($search)$/$replace{$1}/ ) {
$csv->print( \*STDOUT, $row );
print "\n";
}
(And you can, of course, keep on using split /,/ if you're sure you won't have any of the wacky things that CSV normally supports.)
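For completeness, a split-based version might look like this (a sketch under that assumption; the file names list.csv and scores.csv are made up):
use strict;
use warnings;

# Build the ID-to-name map from the list file.
my %replace;
open my $list_fh, '<', 'list.csv' or die $!;
while (<$list_fh>) {
    chomp;
    my ($id, $name) = split /,/, $_, 2;
    $replace{$id} = $name;
}
close $list_fh;

# Swap the ID for the name in the scores file, skipping unmatched rows.
open my $scores_fh, '<', 'scores.csv' or die $!;
while (<$scores_fh>) {
    my ($id, $rest) = split /,/, $_, 2;
    print "$replace{$id},$rest" if exists $replace{$id};
}
close $scores_fh;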
I would like to provide a very different approach.
Let's say you are way more comfortable with databases than with Perl's data structures. You can use DBD::CSV to turn your CSV files into a kind of relational database. It uses Text::CSV under the hood (hat tip to @Sobrique). You will need to install it from CPAN, as it's not bundled in the default DBI distribution.
use strict;
use warnings;
use Data::Printer; # for p
use DBI;
my $dbh = DBI->connect( "dbi:CSV:", undef, undef, { f_ext => '.csv' } );
$dbh->{csv_tables}->{names} = { col_names => [qw/id name/] };
$dbh->{csv_tables}->{numbers} = { col_names => [qw/id int float/] };
my $sth_select = $dbh->prepare(<<'SQL');
SELECT names.name, numbers.int, numbers.float
FROM names
JOIN numbers ON names.id = numbers.id
SQL
# column types will be silently discarded
$dbh->do('CREATE TABLE result ( name CHAR(255), int INTEGER, float INTEGER )');
my $sth_insert =
$dbh->prepare('INSERT INTO result ( name, int, float ) VALUES ( ?, ?, ? ) ');
$sth_select->execute;
while (my @res = $sth_select->fetchrow_array ) {
p @res;
$sth_insert->execute(@res);
}
What this does is set up column names for the two tables (your CSV files) as those do not have a first row with names. I made the names up based on the data types. It will then create a new table (CSV file) named result and fill it by writing one row at a time.
At the same time it will output data (for debugging purposes) to STDERR through Data::Printer.
[
[0] "Acanthocolla cruciata",
[1] 2,
[2] 0.50421
]
[
[0] "Acanthocyrta haeckeli",
[1] 1,
[2] 0.819671
]
[
[0] "Acanthometra fusca",
[1] 1,
[2] 0.60042
]
The resulting file looks like this:
$ cat scratch/result.csv
name,int,float
"Acanthocolla cruciata",2,0.50421
"Acanthocyrta haeckeli",1,0.819671
"Acanthometra fusca",1,0.60042
I am a beginner with Perl and I want to merge the content of two text files.
I have read some similar questions and answers on this forum, but I still cannot resolve my issues.
The first file has the original ID and the recoded ID of each individual (in the first and fourth columns)
The second file has the recoded ID and some information on some of the individuals (in the first and second columns).
I want to create an output file with the original, recoded and information of these individuals.
This is the Perl script I have created so far, which is not working.
If anyone could help it would be very much appreciated.
use warnings;
use strict;
use diagnostics;
use vars qw( @fields1 $recoded $original $IDF @fields2);
my %columns1;
open (FILE1, "<file1.txt") || die "$!\n Couldn't open file1.txt\n";
while ($_ = <FILE1>)
{
chomp;
@fields1=split /\s+/, $_;
my $recoded = $fields1[0];
my $original = $fields1[3];
my %columns1 = (
$recoded => $original
);
};
open (FILE2, "<file2.txt") || die "$!\n Couldnt open file2.txt \n";
for ($_ = <FILE2>)
{
chomp;
@fields2=split /\s+/, $_;
my $IDF= $fields2[0];
my $F=$fields2[1];
my %columns2 = (
$F => $IDF
);
};
close FILE1;
close FILE2;
open (FILE3, ">output.txt") ||die "output problem\n";
for (keys %columns1) {
if (exists ($columns2{$_}){
print FILE3 "$_ $columns1{$_}\n"
};
}
close FILE3;
One problem is with scoping. In your first loop, you have a my in front of %columns1, which makes it local to the loop, so it is no longer in scope once you exit the loop. As a result, the %columns1 declared outside the loop never has any values set (which is what I suspect you want to set). For the assignment, it is easier to write $columns1{$recoded} = $original;, which assigns the value to that key of the hash.
In the second loop you need to declare %columns2 outside of the loop, and you can use the same style of assignment.
For the third loop, in the print you just need to add $columns2{$_} to the front of the string being printed, so the original ID appears before the recoded ID.
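Putting the first two points together, the first loop might look like this (a sketch, using a lexical filehandle):
my %columns1;                                 # declared outside the loop
open my $fh1, '<', 'file1.txt' or die "Couldn't open file1.txt: $!";
while (my $line = <$fh1>) {
    chomp $line;
    my @fields1 = split /\s+/, $line;
    $columns1{ $fields1[0] } = $fields1[3];   # add a pair instead of rebuilding the hash
}
close $fh1;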
Scope:
The problem is with the scope of the hash variables you have defined. The scope of a variable is limited to the loop inside which it has been defined.
In your code, %columns1 and %columns2 are used outside the while loops, so they must be defined outside the loops.
Compilation error: braces not closed properly
Also, in the "if exists" part, the opening and closing braces are not balanced.
Two further fixes: for ($_ = <FILE2>) reads only a single line, so the second loop should be a while loop like the first; and assigning %columns1 = ( $recoded => $original ) inside a loop rebuilds the hash each time, keeping only the last pair, so individual elements should be assigned instead.
Here is your code with the required corrections made:
use warnings;
use strict;
use diagnostics;
use vars qw( @fields1 $recoded $original $IDF @fields2);
my (%columns1, %columns2);
open (FILE1, "<file1.txt") || die "$!\n Couldn't open file1.txt\n";
while ($_ = <FILE1>)
{
chomp;
@fields1=split /\s+/, $_;
my $recoded = $fields1[0];
my $original = $fields1[3];
$columns1{$recoded} = $original;
}
open (FILE2, "<file2.txt") || die "$!\n Couldn't open file2.txt \n";
while ($_ = <FILE2>)
{
chomp;
@fields2=split /\s+/, $_;
my $IDF= $fields2[0];
my $F=$fields2[1];
$columns2{$F} = $IDF;
}
close FILE1;
close FILE2;
open (FILE3, ">output.txt") ||die "output problem\n";
for (keys %columns1) {
print FILE3 "$_ $columns1{$_} \n" if exists $columns2{$_};
}
close FILE3;
I have a small section of code I'm trying to modify.
What I'm trying to do is have the filename inputted into the third column. At the moment I almost have it working, but I'd like to remove the ".csv"s from the end of each entry in the column. I'd also like to give the column the heading "filename".
I hope the difference between "table1" and "table2" shown above summarises quite well the modification which I'm trying to make here.
The code I'm currently using to create "table1" is the following:
#!/usr/bin/perl
use warnings;
use strict;
open M,"<mapcodelist.txt" or die "mapcodelist.txt $!";
my %m;
while( <M> ){
my($k,$v)=split;
$v=~s/\./_/g;
$m{$k}=$v;
}
close M;
chdir "C:/Users/Stephen/Desktop/Database_Design/" or die $!;
@ARGV=<*.csv>;
$^I=".bak";
while( <> ){
chomp;
$\=/^mass/?",filename$/": ",$ARGV$/";
print;
}
for( <*.csv> ){
my $r;
($r=$_) =~ s/\w+_(\w+)(?=\.csv)/$1_$m{$1}/;
rename $_,$r or warn " rename $_,$r $!";
}
Any advice with this would be very much appreciated.
Thanks.
You can try the following Perl script:
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;
my ($prev_lc);
open my $fh, '<', shift or die;
my $csv = Text::CSV_XS->new({ eol => "\n" }) or die;
while ( my $row = $csv->getline($fh) ) {
if ( $csv->record_number == 1 ) {
$prev_lc = $row->[$#$row];
$csv->print( \*STDOUT, [ @$row[0 .. $#$row - 1], 'Filename' ] );
next;
}
$prev_lc =~ s/\.csv$//;
$csv->print( \*STDOUT, [ @$row[0 .. $#$row - 1], $prev_lc ] );
## Previous last column.
$prev_lc = $row->[$#$row];
}
It uses an auxiliary variable to add the missing header and to process each whole data line at the same time. I simply use a regular expression to remove the extension.
With the following dummy test data (infile), and assuming that the last line doesn't have a file name because of the header:
mass,intensity,20130730_p12_A2.csv
2349.345,56.23423,20130730_p12_A2.csv
744.2884,5.01
Run the script like:
perl script.pl infile
That yields:
mass,intensity,Filename
2349.345,56.23423,20130730_p12_A2
744.2884,20130730_p12_A2
Perhaps it's not perfect for the particular data that you didn't show, and I didn't take into account all the code you posted where you handle many files. But you can see that it works the way you asked, and it's left as work for you to adapt it to your needs, if necessary.