I have a small section of code I'm trying to modify.
What I'm trying to do is have the filename inserted into the third column. At the moment I almost have it working, but I'd like to remove the ".csv" from the end of each entry in the column. I'd also like to give the column the heading "filename".
I hope the difference between "table1" and "table2" shown above summarises quite well the modification which I'm trying to make here.
The code I'm currently using to create "table1" is the following:
#!/usr/bin/perl
use warnings;
use strict;
open M, "<mapcodelist.txt" or die "mapcodelist.txt $!";
my %m;
while ( <M> ) {
    my ($k, $v) = split;
    $v =~ s/\./_/g;
    $m{$k} = $v;
}
close M;
chdir "C:/Users/Stephen/Desktop/Database_Design/" or die $!;
@ARGV = <*.csv>;
$^I = ".bak";
while ( <> ) {
    chomp;
    $\ = /^mass/ ? ",filename$/" : ",$ARGV$/";
    print;
}
for ( <*.csv> ) {
    my $r;
    ($r = $_) =~ s/\w+_(\w+)(?=\.csv)/$1_$m{$1}/;
    rename $_, $r or warn " rename $_,$r $!";
}
Any advice with this would be very much appreciated.
Thanks.
You can try following perl script:
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;
my ($prev_lc);
open my $fh, '<', shift or die;
my $csv = Text::CSV_XS->new({ eol => "\n" }) or die;
while ( my $row = $csv->getline($fh) ) {
    if ( $csv->record_number == 1 ) {
        $prev_lc = $row->[$#$row];
        $csv->print( \*STDOUT, [ @$row[0 .. $#$row - 1], 'Filename' ] );
        next;
    }
    $prev_lc =~ s/\.csv$//;
    $csv->print( \*STDOUT, [ @$row[0 .. $#$row - 1], $prev_lc ] );
    ## Previous last column.
    $prev_lc = $row->[$#$row];
}
It uses an auxiliary variable to add the missing header and to process each whole data line in a single pass. A simple regular expression removes the extension.
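For instance, the stripping step on its own (using the file name that appears in the dummy data below) behaves like this:

my $name = '20130730_p12_A2.csv';
$name =~ s/\.csv$//;    # anchored with $, so only a trailing .csv is removed
# $name is now '20130730_p12_A2'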
With the following dummy test data (infile), and assuming that the last line has no file name because the header row already carries one:
mass,intensity,20130730_p12_A2.csv
2349.345,56.23423,20130730_p12_A2.csv
744.2884,5.01
Run the script like:
perl script.pl infile
That yields:
mass,intensity,Filename
2349.345,56.23423,20130730_p12_A2
744.2884,20130730_p12_A2
It may not be perfect for the particular data that you didn't show, and I didn't take into account all the code you posted where you handle many files. But you can see that it works the way you asked, and it's left to you to adapt it to your needs, if necessary.
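If you would rather keep the in-place, multi-file structure of your own script, the same two fixes (the header label plus the extension stripping) drop into your loop like this; a sketch that assumes your files follow the pattern above:

@ARGV = <*.csv>;
$^I   = ".bak";
while ( <> ) {
    chomp;
    (my $name = $ARGV) =~ s/\.csv$//;           # current file name with the extension stripped
    $\ = /^mass/ ? ",filename$/" : ",$name$/";  # header row gets the label, data rows the name
    print;
}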
I am working on a perl script that successfully generates output files containing hashes. I want to use those hashes in my file. Is it possible to include a file that is generated in that file or will I have to create another file?
Technically, it might be cleaner to start a new .pl file that uses those hashes, but I would like to keep everything in a single script if possible. Is it even possible to do so?
Edit: I'm just unsure if I can "circle" it back around so I can use those hashes in my file, because the hashes are generated on a weekly basis. I don't want my file to mistakenly reach out for last week's hashes instead of the newly generated ones. I have not yet written my script in a manner that classifies each week's generated hashes.
In summary, here is what my file does: it extracts a table from another file and removes the columns and rows that are not needed. Once left with the only two columns needed, it puts them into a hash, one column being the key and the other the value. For this reason, I've found Data::Dumper to be the best option for my hashes. I'm intermediate in Perl, and this is a script I'm putting together for an internship.
Here is an example of how you can save a hash as JSON to a file and later read the JSON back into a Perl hash. This example uses JSON::XS:
use strict;
use warnings;
use Data::Dumper;
use JSON::XS;

{
    my %h = (a => 1, b => 2);
    my $str = encode_json( \%h );
    my $fn = 'test.json';
    save_json( $fn, \%h );
    my $h2 = read_json( $fn );
    print Dumper( $h2 );
}

sub read_json {
    my ( $fn ) = @_;
    open ( my $fh, '<', $fn ) or die "Could not open file '$fn': $!";
    my $str = do { local $/; <$fh> };
    close $fh;
    my $h = decode_json $str;
    return $h;
}

sub save_json {
    my ( $fn, $hash ) = @_;
    my $str = encode_json( $hash );
    open ( my $fh, '>', $fn ) or die "Could not open file '$fn': $!";
    print $fh $str;
    close $fh;
}
Output:
$VAR1 = {
          'a' => 1,
          'b' => 2
        };
Some alternatives to JSON are YAML and Storable.
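For example, Storable serializes native Perl data structures straight to disk; a minimal sketch:

use strict;
use warnings;
use Storable qw(store retrieve);

my %h = (a => 1, b => 2);
store \%h, 'test.storable';            # write the hash to a binary file
my $h2 = retrieve('test.storable');    # read it back as a hash reference
print "$_ => $h2->{$_}\n" for sort keys %$h2;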
I have files with several columns that contain text in the following format “number/number:zero,number_of_interest”. Example: “1/1:0,13”.
I need to remove rows if the number of interest is less than 20 in any of the columns.
I prefer to use egrep and not to read in the file, but I'm not sure how to evaluate the number of interest in each column in a single statement.
I’m also removing rows that contain SVLEN=-1 or SVLEN=-2 and it seems to be working well with egrep:
$cmd2 = `egrep -v 'SVLEN=-1;|SVLEN=-2;' $my_vcf > $my_new_vcf`; print $cmd2;
I've tried the following but it did not work:
my $cmd2 = `egrep -v 'SVLEN=-1;|SVLEN=-2;|(\,(\d+) < 20)' $my_vcf > $my_new_vcf`; print $cmd2;
Thank you.
egrep is the wrong tool for this purpose, as it cannot do arithmetic within its regular expressions.
Because you already have a Perl script, you are better off using Perl itself to achieve your goal.
Unfortunately you have to open and read the file line by line to do so, but that is exactly what egrep does. How else could it judge the lines?
#!/usr/bin/env perl
use strict;
use warnings;
use List::Util;

sub filter_lines
{
    my $in_filename  = shift;
    my $out_filename = shift;
    open( my $fhin,  '<', $in_filename )  or die "cannot open $in_filename: $!\n";
    open( my $fhout, '>', $out_filename ) or die "cannot open $out_filename: $!\n";
    while ( my $line = <$fhin> ) {
        next if ( $line =~ /SVLEN=-1;|SVLEN=-2;/ );
        if ( my @numbers_of_interest = ( $line =~ m/\d+\/\d+:0,(\d+)/g ) ) {
            # drop the row when any number of interest is below 20
            next if List::Util::min(@numbers_of_interest) < 20;
        }
        print $fhout $line;
    }
    close($fhin);
    close($fhout);
}

filter_lines( $my_vcf, $my_new_vcf );
Because I have no exact input line, the pattern that captures @numbers_of_interest might be a bit inaccurate and need improvement. There's also much room for optimization in case this code turns out to be slow.
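If you want something closer in spirit to your egrep pipeline, the same filter also fits in a one-liner; a sketch where in.vcf and out.vcf stand in for your $my_vcf and $my_new_vcf:

perl -ne 'print unless /SVLEN=-1;|SVLEN=-2;/ or grep { $_ < 20 } m{\d+/\d+:0,(\d+)}g' in.vcf > out.vcf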
I have two CSV files. The first is a list file, it contains the ID and names. For example
1127100,Acanthocolla cruciata
1127103,Acanthocyrta haeckeli
1127108,Acanthometra fusca
The second file is the one where I want to exchange the ID and extract the line when the first number matches. The first column of numbers corresponds in each file. For example
1127108,1,0.60042
1127103,1,0.819671
1127100,2,0.50421,0.527007
10207,3,0.530422,0.624466
So I want to end up with CSV file like this
Acanthometra fusca,1,0.60042
Acanthocyrta haeckeli,1,0.819671
Acanthocolla cruciata,2,0.50421,0.527007
I tried Perl, but opening two files at once proved messy. So I tried converting one of the CSV files to a string and parsing it that way, but that didn't work. Then I was reading about grep and other one-liners, but I am not familiar with them. Would this be possible with grep?
This is the Perl code I tried
use strict;
use warnings;
open my $csv_score, '<', "$ARGV[0]" or die qq{Failed to open "$ARGV[0]" for input: $!\n};
open my $csv_list, '<', "$ARGV[1]" or die qq{Failed to open "$ARGV[1]" for input: $!\n};
open my $out, ">$ARGV[0]_final.txt" or die qq{Failed to open for output: $!\n};
my $string = <$csv_score>;
while ( <$csv_list> ) {
    my ($find, $replace) = split /,/;
    $string =~ s/$find/$replace/g;
    if ($string =~ m/^$replace/){
        print $out $string;
    }
}
close $csv_score;
close $csv_list;
close $out;
The general-purpose text-processing tool that comes with all UNIX installations is named awk:
$ awk -F, -v OFS=, 'NR==FNR{m[$1]=$2;next} $1=m[$1]' file1 file2
Acanthometra fusca,1,0.60042
Acanthocyrta haeckeli,1,0.819671
Acanthocolla cruciata,2,0.50421,0.527007
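A quick gloss on the one-liner, since the idiom is dense:

awk -F, -v OFS=, '
    NR==FNR { m[$1] = $2; next }   # first file only: build a map from ID to name
    $1 = m[$1]                     # second file: replace the ID; a non-empty result is true, which triggers the default print
' file1 file2

The assignment in the second rule doubles as the row filter, which is why the unmatched 10207 line is dropped from this output.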
Your code was failing because you read only the first line from the $csv_score file, and you tried to print $string every time it was changed. You also failed to remove the newline from the end of the lines of your $csv_list file. If you fix those things, then it looks like this
use strict;
use warnings;
open my $csv_score, '<', "$ARGV[0]" or die qq{Failed to open "$ARGV[0]" for input: $!\n};
open my $csv_list, '<', "$ARGV[1]" or die qq{Failed to open "$ARGV[1]" for input: $!\n};
open my $out, ">$ARGV[0]_final.txt" or die qq{Failed to open for output: $!\n};
my $string = do {
    local $/;
    <$csv_score>;
};

while ( <$csv_list> ) {
    chomp;
    my ( $find, $replace ) = split /,/;
    $string =~ s/$find/$replace/g;
}
print $out $string;
close $csv_score;
close $csv_list;
close $out;
output
Acanthometra fusca,1,0.60042
Acanthocyrta haeckeli,1,0.819671
Acanthocolla cruciata,2,0.50421,0.527007
10207,3,0.530422,0.624466
However, that's not a safe way of doing things, as IDs may be found elsewhere than at the start of a line.
I would build a hash out of the $csv_list file like this, which also makes the program more concise:
use strict;
use warnings;
use v5.10.1;
use autodie;
my %ids;
{
    open my $fh, '<', $ARGV[1];
    while ( <$fh> ) {
        chomp;
        my ($id, $name) = split /,/;
        $ids{$id} = $name;
    }
}

open my $in_fh, '<', $ARGV[0];
open my $out_fh, '>', "$ARGV[0]_final.txt";

while ( <$in_fh> ) {
    s{^(\d+)}{$ids{$1} // $1}e;
    print $out_fh $_;
}
The output is identical to that of the first program above. The /e modifier evaluates the replacement as Perl code, and $ids{$1} // $1 falls back to the original ID when there is no mapping, which is why the unmatched 10207 row passes through unchanged.
The problem with the code as written is that you only do this once:
my $string = <$csv_score>;
This reads one line from $csv_score and you don't ever use the rest.
I would suggest that you need to:
Read the first file into a hash
Iterate the second file, and do a replace on the first column.
Using Text::CSV is generally a good idea for processing CSV, though it doesn't seem strictly necessary for your example.
So:
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV;
use Data::Dumper;
my $csv = Text::CSV->new( { binary => 1 } );

my %replace;
while ( my $row = $csv->getline( \*DATA ) ) {
    last if $row->[0] =~ m/NEXT/;
    $replace{ $row->[0] } = $row->[1];
}
print Dumper \%replace;

my $search = join( "|", map {quotemeta} keys %replace );
$search = qr/$search/;

while ( my $row = $csv->getline( \*DATA ) ) {
    $row->[0] =~ s/^($search)$/$replace{$1}/;
    $csv->print( \*STDOUT, $row );
    print "\n";
}
__DATA__
1127100,Acanthocolla cruciata
1127103,Acanthocyrta haeckeli
1127108,Acanthometra fusca
NEXT
1127108,1,0.60042
1127103,1,0.819671
1127100,2,0.50421,0.527007
10207,3,0.530422,0.624466
Note - this still prints that last line of your source content:
"Acanthometra fusca ",1,"0.60042 "
"Acanthocyrta haeckeli ",1,"0.819671 "
"Acanthocolla cruciata ",2,0.50421,"0.527007 "
(Your data contained whitespace, so Text::CSV wraps it in quotes)
If you want to discard that, then you could test if the replace actually occurred:
if ( $row->[0] =~ s/^($search)$/$replace{$1}/ ) {
    $csv->print( \*STDOUT, $row );
    print "\n";
}
(And you can, of course, keep on using split /,/ if you're sure you won't have any of the wacky things that CSV normally supports.)
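A quoted field with an embedded comma is one such wacky thing; a hypothetical row shows how a naive split gets it wrong:

use Text::CSV;
my $line  = '"Smith, John",42';                 # hypothetical row: a comma inside a quoted field
my @naive = split /,/, $line;                   # ('"Smith', ' John"', '42') - three broken fields
my $csv   = Text::CSV->new( { binary => 1 } );
$csv->parse($line);
my @good  = $csv->fields;                       # ('Smith, John', '42')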
I would like to provide a very different approach.
Let's say you are way more comfortable with databases than with Perl's data structures. You can use DBD::CSV to turn your CSV files into a kind of relational database. It uses Text::CSV under the hood (hat tip to @Sobrique). You will need to install it from CPAN, as it's not bundled with the default DBI distribution.
use strict;
use warnings;
use Data::Printer; # for p
use DBI;

my $dbh = DBI->connect( "dbi:CSV:", undef, undef, { f_ext => '.csv' } );
$dbh->{csv_tables}->{names}   = { col_names => [qw/id name/] };
$dbh->{csv_tables}->{numbers} = { col_names => [qw/id int float/] };

my $sth_select = $dbh->prepare(<<'SQL');
SELECT names.name, numbers.int, numbers.float
FROM names
JOIN numbers ON names.id = numbers.id
SQL

# column types will be silently discarded
$dbh->do('CREATE TABLE result ( name CHAR(255), int INTEGER, float INTEGER )');
my $sth_insert =
  $dbh->prepare('INSERT INTO result ( name, int, float ) VALUES ( ?, ?, ? ) ');

$sth_select->execute;
while (my @res = $sth_select->fetchrow_array ) {
    p @res;
    $sth_insert->execute(@res);
}
What this does is set up column names for the two tables (your CSV files) as those do not have a first row with names. I made the names up based on the data types. It will then create a new table (CSV file) named result and fill it by writing one row at a time.
At the same time it will output data (for debugging purposes) to STDERR through Data::Printer.
[
    [0] "Acanthocolla cruciata",
    [1] 2,
    [2] 0.50421
]
[
    [0] "Acanthocyrta haeckeli",
    [1] 1,
    [2] 0.819671
]
[
    [0] "Acanthometra fusca",
    [1] 1,
    [2] 0.60042
]
The resulting file looks like this:
$ cat scratch/result.csv
name,int,float
"Acanthocolla cruciata",2,0.50421
"Acanthocyrta haeckeli",1,0.819671
"Acanthometra fusca",1,0.60042
The question that follows is a made-up, simplified example of a more complex problem that I'm trying to solve. I would like to preserve the structure of the code, especially the use of the %hash to store the outcomes for each patient, but I do not need to read the data file into memory (though I cannot find a way of reading my CSV data file line by line from the end).
My sample data is made up of events that occur to patients. A patient can be added to the study (Event=B), or he can die (Event=D) or exit the study (Event=F). Death and Exit are the only two possible outcomes for each patient.
For each event I have the date of occurrence (in hours from a given point in time), the unique ID number of each patient, the event, and the Outcome (a field set to 0 for every patient).
I'm trying to write code that will change the input file by recording, next to each addition of a new patient, his eventual outcome (death or exit).
In order to do so, I read the file from the end, and whenever I encounter the death or exit of a patient, I populate a hash that matches the patient ID with the outcome. When I encounter an event telling me that a new patient has been added to the study, I match his ID with those in the hash and change the value of "Outcome" from 0 to either D or F.
I have been able to write code that reads the file from the bottom and then creates a new, modified file with the updated value for Outcome. The problem is that since I read the input file from bottom to top and print each line after reading it, the output file is in reversed order, and I do not know how to change this. Also, ideally I don't want to create a new file but would like to simply modify the input one. However, I have failed in every attempt to do so.
Sample data:
Data,PatientNumber,Event,Outcome
25201027,562962838335407,B,0
25201028,562962838335408,B,0
25201100,562962838335407,D,0
25201128,562962838335408,F,0
My code:
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;

open (my $fh_input, "<", "mini_test2.csv")
    or die "cannot open > mini_test2.csv: $!";
my @lines = <$fh_input>;
close $fh_input;

open (my $fh_output, ">>", "Revised_mini_test2.csv")
    or die "cannot open > Revised_mini_test2.csv: $!";

my $length = scalar(@lines);
my %outcome;
my @input_variables;

for (my $i = 1; $i < @lines; $i++){
    chomp($lines[$length-$i]);
    @input_variables = split(/,/, $lines[$length - $i]);
    if ($input_variables[2] eq "D" || $input_variables[2] eq "F"){
        $outcome{$input_variables[1]} = $input_variables[2];
        my $line = join(",", @input_variables);
        print $fh_output $line . "\n";
    }
    elsif($input_variables[2] eq "B") {
        $input_variables[3] = $outcome{$input_variables[1]};
        my $line = join(",", @input_variables);
        print $fh_output $line . "\n";
    }
    else{
        # necessary since the actual data has many more possible "Events"
        my $line = join(",", @input_variables);
        print $fh_output $line . "\n";
    }
}
close $fh_output;
EDIT: desired output should be
Data,PatientNumber,Event,Outcome
25201027,562962838335407,B,D
25201028,562962838335408,B,F
25201100,562962838335407,D,0
25201128,562962838335408,F,0
Also, an additional complication is that a patient's unique ID gets re-used after he exits the study. This means that I cannot do a first pass to store the outcome for each patient and a second one to update the values of Outcome.
EDIT 2: let me clarify that when I say each patient has a "unique ID", I mean that there cannot be two patients in the study with the same ID at the same time. However, if a patient exits the study, his ID gets re-used.
Update
I have just read your additional information that patient numbers are re-used once they exit the study. Why you would design a system like that I don't know, but there it is.
It becomes far harder to write something straightforward without reading the file into an array, so that's what I have done here.
use strict;
use warnings;
use 5.010;
use autodie;

open my $fh, '<', 'mini_test2.csv';

my @data;
while ( <$fh> ) {
    chomp;
    push @data, [ split /,/ ];
}

my %outcome;

for ( my $i = $#data; $i > 0; --$i ) {
    my ($patient_number, $event) = @{$data[$i]}[1,2];
    if ( $event =~ /[DF]/ ) {
        $outcome{$patient_number} = $event;
    }
    elsif ( $event =~ /[B]/ ) {
        $data[$i][3] = delete $outcome{$patient_number} // 0;
    }
}

print join(',', @$_), "\n" for @data;
output
Data,PatientNumber,Event,Outcome
25201027,562962838335407,B,D
25201028,562962838335408,B,F
25201100,562962838335407,D,0
25201128,562962838335408,F,0
There are a few ways to approach this. I have chosen to take two passes through the file, first accumulating the outcome for each patient in a hash, and then replacing all the outcome fields in the B records.
use strict;
use warnings;
use 5.010;
use autodie;
use Fcntl ':seek';

my %outcome;

open my $fh, '<', 'mini_test2.csv';

<$fh>;    # Drop header

while ( <$fh> ) {
    chomp;
    my @fields = split /,/;
    my ($patient_number, $event) = @fields[1,2];
    if ( $event =~ /[DF]/ ) {
        $outcome{$patient_number} = $event;
    }
}

seek $fh, 0, SEEK_SET;    # Rewind
print scalar <$fh>;       # Copy header

while ( <$fh> ) {
    chomp;
    my @fields = split /,/;
    my ($patient_number, $event) = @fields[1,2];
    if ( $event !~ /[DF]/ ) {
        $fields[3] = $outcome{$patient_number} // 0;
    }
    print join(',', @fields), "\n";
}
output
Data,PatientNumber,Event,Outcome
25201027,562962838335407,B,D
25201028,562962838335408,B,F
25201100,562962838335407,D,0
25201128,562962838335408,F,0
What we can do is, instead of printing out the line at each stage, write it back to the array of lines. Then we can just print them all out at the end.
for (my $i = $#lines; $i >= 1; $i--)   # stop at 1 so the header row keeps its Outcome label
{
    chomp $lines[$i];
    @input_variables = split /,/, $lines[$i];
    if ($input_variables[2] eq "D" || $input_variables[2] eq "F")
    {
        $outcome{$input_variables[1]} = $input_variables[2];
    }else
    {
        $input_variables[3] = $outcome{$input_variables[1]};
    }
    $lines[$i] = join ",", @input_variables;
}
chomp $lines[0]; # the header row was never chomped in the loop
$, = "\n"; # Make the list separator for printing a newline.
print $fh_output @lines;
As for the second question of modifying the original file. It is possible to open a file for both reading and writing using modes "+<", "+>", or "+>>". Don't do this! It is error prone as you must replace data character by character.
The standard way to "modify" an existing file is to rename it, read from the renamed file, write to a new file with the original name, and delete the temp file.
my $file_name = "mini_test2.csv";
my $tmp_file_name = $file_name . ".tmp";
rename $file_name, $tmp_file_name;
open (my $fh_input, "<", $tmp_file_name)
or die "cannot open > $tmp_file_name: $!";
open (my $fh_output, ">>", $file_name)
or die "cannot open > $file_name: $!";
#Your code to process the data.
close $fh_input;
close $fh_output;
#delete the temp file
unlink $tmp_file_name;
But, in your case, you slurp all of the data into memory right away. Just open for writing that clobbers existing files
open (my $fh_output, ">", "mini_test2.csv")
or die "cannot open > mini_test2.csv: $!";
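For completeness, Perl can automate the rename-and-rewrite dance for you through the in-place edit variable $^I (the same mechanism behind perl -i.bak); a minimal sketch using the same file name:

$^I   = ".bak";                  # the original survives as mini_test2.csv.bak
@ARGV = ("mini_test2.csv");      # file list for the <> loop
while (<>) {
    # ... transform $_ here ...
    print;                       # print now goes to the replacement file, not STDOUT
}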
I am a beginner with Perl and I want to merge the content of two text files.
I have read some similar questions and answers on this forum, but I still cannot resolve my issues
The first file has the original ID and the recoded ID of each individual (in the first and fourth columns).
The second file has the recoded ID and some information on some of the individuals (in the first and second columns).
I want to create an output file with the original, recoded and information of these individuals.
This is the perl script I have created so far, which is not working.
If anyone could help it would be very much appreciated.
use warnings;
use strict;
use diagnostics;
use vars qw( @fields1 $recoded $original $IDF @fields2 );

my %columns1;
open (FILE1, "<file1.txt") || die "$!\n Couldn't open file1.txt\n";
while ($_ = <FILE1>)
{
    chomp;
    @fields1 = split /\s+/, $_;
    my $recoded  = $fields1[0];
    my $original = $fields1[3];
    my %columns1 = (
        $recoded => $original
    );
};

open (FILE2, "<file2.txt") || die "$!\n Couldn't open file2.txt \n";
for ($_ = <FILE2>)
{
    chomp;
    @fields2 = split /\s+/, $_;
    my $IDF = $fields2[0];
    my $F   = $fields2[1];
    my %columns2 = (
        $F => $IDF
    );
};

close FILE1;
close FILE2;

open (FILE3, ">output.txt") || die "output problem\n";
for (keys %columns1) {
    if (exists ($columns2{$_}){
        print FILE3 "$_ $columns1{$_}\n"
    };
}
close FILE3;
close FILE3;
One problem is with scoping. In your first loop, you have a my in front of %columns1, which makes it local to the loop, so it is no longer in scope after the loop ends, and the outer %columns1 (which is what I suspect you want to set) never has any values set. For the assignment, it is easier to write $columns1{$recoded} = $original;, which assigns the value to that key of the hash.
In the second loop you need to declare %columns2 outside of the loop, and you can use the same style of assignment.
For the third loop, in the print statement you just need to add $columns2{$_} to the front part of the string being printed to get the original ID printed before the recoded ID.
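A minimal illustration of the shadowing problem (hypothetical loop):

my %h;                     # outer hash
for my $i (1 .. 3) {
    my %h = ($i => $i);    # inner %h shadows the outer one
}                          # inner %h is destroyed here, every iteration
print scalar keys %h;      # prints 0: the outer hash was never touched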
Scope:
The problem is with the scope of the hash variables you have defined. The scope of a variable is limited to the loop inside which it has been defined.
In your code, %columns1 and %columns2 are used outside the while loops; hence, they should be defined outside the loops.
Compilation error: braces not closed properly
Also, in the "if exists" part, the symmetry of opened and closed parentheses is broken.
Here is your code with the required corrections made (I have also used the per-key assignment suggested above, and a while loop for FILE2, so that every line of file2.txt is read):
use warnings;
use strict;
use diagnostics;
use vars qw( @fields1 $recoded $original $IDF @fields2 );

my (%columns1, %columns2);

open (FILE1, "<file1.txt") || die "$!\n Couldn't open file1.txt\n";
while ($_ = <FILE1>)
{
    chomp;
    @fields1 = split /\s+/, $_;
    my $recoded  = $fields1[0];
    my $original = $fields1[3];
    $columns1{$recoded} = $original;
}

open (FILE2, "<file2.txt") || die "$!\n Couldn't open file2.txt\n";
while ($_ = <FILE2>)
{
    chomp;
    @fields2 = split /\s+/, $_;
    my $IDF = $fields2[0];
    my $F   = $fields2[1];
    $columns2{$F} = $IDF;
}

close FILE1;
close FILE2;

open (FILE3, ">output.txt") || die "output problem\n";
for (keys %columns1) {
    print FILE3 "$_ $columns1{$_} \n" if exists $columns2{$_};
}
close FILE3;