Perl: string formatting in tab delimited file

I have no background in programming whatsoever, so I would appreciate it if you would explain how and why any code you recommend should be written the way it is.
I have a data matrix with 2,000+ samples, and I need to manipulate the format of one of its columns so that it is easier to merge with my other matrix.
For example, one column is the sample number (column #16). Its format is currently similar to ABCD-A1-A0SD-01A-11D-A10Y-09, but I would like to change it to ABCD-A1-A0SD-01A. This will put it in the right format so that I can merge it with another matrix. I have not been able to find any information on how to proceed with this step.
The sample input should look like this:
ABCD-A1-A0SD-01A-11D-A10Y-09
ABCD-A1-A0SD-01A-11D-A10Y-09
ABCD-A1-A0SE-01A-11D-A10Y-09
ABCD-A1-A0SE-01A-11D-A10Y-09
ABCD-A1-A0SF-01A-11D-A10Y-09
ABCD-A1-A0SH-01A-11D-A10Y-09
ABCD-A1-A0SI-01A-11D-A10Y-09
I want the last three extensions removed. The output sample should look like this:
ABCD-A1-A0SD-01A
ABCD-A1-A0SD-01A
ABCD-A1-A0SE-01A
ABCD-A1-A0SE-01A
ABCD-A1-A0SF-01A
ABCD-A1-A0SH-01A
ABCD-A1-A0SI-01A
Finally, the matrix that I want to merge with has a different layout; in other words, the number of columns and rows are different. This is an issue when I tackle the next step, which is merging the two matrices together. The original matrix has about 52 columns and 2,000+ rows, whereas the merging matrix only has 15 columns and 467 rows.
Each row of the original matrix has mutational information for a patient. This means that the same patient with the same ID might appear many times. The second matrix contains the patient information, so no patients are repeated in that matrix. When merging the matrices, I want to make sure that every patient mutation (each row) is matched with its corresponding information from the merging matrix.
My sample code:
#!/usr/bin/perl
use strict;
use warnings;
my $file = 'sorted_samples_2.txt';
open(INFILE, $file) or die "Can't open file: $!\n";
open(my $outfile, '>', 'sorted_samples_changed.txt');
foreach my $line (<INFILE>) {
print "The input line is $line\n";
my @columns = split('\t', $line);
($columns[15]) = $columns[15]=~/:((\w\w\w\w-\w\d-\w|\w\w-\d\d\w)+)$/;
printf $outfile "@columns/n";
}
Issues: the code deletes the header and deletes the string in column 16.

A few issues about your code:
Good job on including use strict; and use warnings;. Keep doing that.
Anytime you're doing file or directory processing, include use autodie; as well.
Always use lexical file handles $infh instead of globs INFILE.
Use the 3 parameter form of open.
Always process a file line by line using a while loop. Using a for loop loads the entire file into memory first.
Don't forget to chomp your input from a file.
Use the line number variable $. if you want special logic for your header
The first parameter of split is a pattern, so use /\t/. The only exception to this is ' ', which has special meaning. Currently you're introducing a bug by using a single-quoted string.
When altering a value with a regex, try to focus on what you DO want instead of what you DON'T. In this case it looks like you want 4 groups separated by dashes, and then truncate the rest. Focus on matching those groups.
Don't use printf when you mean print.
The following applies these fixes to your script:
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
my $infile = 'sorted_samples_2.txt';
my $outfile = 'sorted_samples_changed.txt';
open my $infh, '<', $infile;
open my $outfh, '>', $outfile;
while (my $line = <$infh>) {
chomp $line;
my @columns = split /\t/, $line;
if ($. > 1) {
$columns[15] =~ s/^(\w{4}-\w\d-\w{4}-\w{3}).*/$1/
or warn "Unable to fix column at line $.";
}
print $outfh join("\t", @columns), "\n";
}
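As a side note, since the wanted prefix is just the first four dash-separated groups, you could also truncate the field without a hand-written pattern. This is only a sketch of the alternative, not part of the fix above:
# Keep the first four dash-separated groups of column 16,
# e.g. ABCD-A1-A0SD-01A-11D-A10Y-09 becomes ABCD-A1-A0SD-01A.
$columns[15] = join '-', ( split /-/, $columns[15] )[0 .. 3];
Both do the same job; use whichever you find easier to read.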

You need to define the scope of your variables with 'my' in the declaration itself when you use 'use strict'.
In your case, you should use my @sort = sort {....} in the first line, and
you should have an array reference $t defined somewhere to dereference it in the second line. You don't have @array declared anywhere in this code, which is why you got all those errors. Make sure you understand what you are doing before you do it.
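For the merging part of the question (matching every mutation row to its patient information), one common approach is to load the smaller matrix into a hash keyed on the shortened sample ID, then append the matching patient columns to each mutation row. The sketch below is only an illustration: the file names, the tab-delimited layout, and the assumption that the sample ID is the first column of the patient file are all guesses you would adjust to your real data.
use strict;
use warnings;
use autodie;

my %patient;    # shortened sample ID => arrayref of the rest of the patient row

open my $pat_fh, '<', 'patient_info.txt';    # hypothetical file name
while ( my $line = <$pat_fh> ) {
    chomp $line;
    my ( $id, @info ) = split /\t/, $line;   # assumes the ID is the first column
    $patient{$id} = \@info;
}

open my $mut_fh, '<', 'sorted_samples_changed.txt';
open my $out_fh, '>', 'merged.txt';
while ( my $line = <$mut_fh> ) {
    chomp $line;
    my @columns = split /\t/, $line;
    my $info = $patient{ $columns[15] };     # column 16 now holds the shortened ID
    print {$out_fh} join( "\t", @columns, ( $info ? @$info : () ) ), "\n";
}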


Check whether a field from a line of text matches a value

I have been using the following Perl code to extract text from multiple text files. It works fine.
Example of a couple of lines in one of the input files:
Fa0/19 CUTExyz notconnect 129 half 100 10/100BaseTX
Fa0/22 xyz MLS notconnect 1293 half 10 10/100BaseTX
What I need is to match the numbers in each line exactly (i.e. 129 is not matched by 1293) and print the corresponding lines.
It would also be nice to match a range of numbers, leaving specific numbers out, i.e. match 2 through 10 but not 11, then 12 through 20.
#!/perl/bin/perl
use warnings;
my @files = <c:/perl64/files/*>;
foreach $file ( @files ) {
open( FILE, "$file" );
while ( $line = <FILE> ) {
print "$file $line" if $line =~ /123/n;
}
close FILE;
}
Thank you for the suggestions, but can it be done using the code structure above?
I suggest that you take a look at perldoc perlre.
You need to anchor your regex pattern. The easiest way is probably using \b which is a zero-width boundary between alphanumerics and non-alphanumerics.
#!/perl/bin/perl
use warnings;
use strict;
foreach my $file ( glob "c:/perl64/files/*" ) {
open( my $input, '<', $file ) or die $!;
while (<$input>) {
print "$file $_" if m/\b123\b/;
}
close $input;
}
Note - you should use three-argument open with lexical file handles as above, because it is better practice.
I've also removed the n pattern modifier, as it appears redundant.
Following your edit, though, which gives us some source data, I'd suggest the solution is not to use a regex - your source data looks space delimited. (Maybe those are tabs?)
So I'd suggest you're better off using split and selecting the field you want, and testing it numerically, because you mention matching ranges. This is not a good fit for regexes because they don't understand the numeric content.
Instead:
while ( <$input> ) {
print if (split)[-4] == 129;
}
Note - I use -4 in the split, which indexes from the end of the list.
This is because column 3 contains spaces, so splitting on whitespace is going to produce the wrong result unless we count down from the end of the array. Using a negative index we get the right field each time.
If your data is tab separated then you could use chomp and split /\t/. Or potentially split on /\s{2,}/ to split on 2-or-more spaces
But by selecting the field, you can do numeric tests on it, like
if $fields[-4] > 100 and $fields[-4] < 200
etc.
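For the range part of your question (for example, match 2 through 10 but not 11, then 12 through 20), a plain numeric test on that field is all you need. A rough sketch, assuming the same whitespace-delimited layout and the -4 index used above:
while ( my $line = <$input> ) {
    my $n = ( split ' ', $line )[-4];    # the numeric field, counted from the end
    print "$file $line"
        if ( $n >= 2 && $n <= 10 ) || ( $n >= 12 && $n <= 20 );
}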
I hope you don't get the answers you're asking for, which discard best practice because of your unfamiliarity with Perl. It is inappropriate to ask how to write an ugly solution because proper Perl is beyond your reach
As has been said repeatedly on this site, if you don't know how to do a job then you should hire someone who does know and pay them for their work. No other profession that I know has the expectation of getting quality work done for free
Here's a few notes on your code. Wherever you have learned your techniques, you have been looking at a very outdated resource
Do you really have a root directory perl, so that your compiler is /perl/bin/perl? That's very unusual, and there is no need to use a shebang line in Windows
You must always add use strict and use warnings 'all' at the top of every Perl program you write, and declare all of your variables using my as close as possible to their first point of use. For some reason you do this with @files but not with $file
It is better to replace <c:/perl64/files/*> with glob 'C:/perl64/files/*'. Otherwise the code is less clear because Perl overloads the <> operator
Don't put variable names inside double quotes. It is unnecessary at best, and may cause bugs. So "$file" should be $file
Always use the three-parameter version of open, so that the second parameter is the open mode
Don't use global file handles. And always test whether the file has been opened correctly, dying with a message including $!—the reason for the failure—if the open fails
open( FILE, "$file" )
should be something like
open my $fh, '<', $file or die qq{Unable to open "$file" for input: $!}
Don't rely on regex patterns for everything. In this case it looks like split would be a better option, or perhaps unpack if your records have fixed-width fields. In my solution below I have used split on "more than one space", but if your real data is different from what you have shown (tab-delimited?) then this is not going to work
Note that Fa0/129 will also be matched by your current approach
This Perl program filters your data, printing lines where the fourth field $fields[3] (delineated by more than one whitespace character) is numerically equal to 129
The output shown is produced when the input is the single file splitn.txt, containing the data shown in your question
use strict;
use warnings 'all';
for my $file ( glob 'C:/perl64/files/*' ) {
open my $fh, '<', $file or die qq{Unable to open "$file" for input: $!};
while ( my $line = <$fh> ) {
chomp $line;
my @fields = split /\s\s+/, $line;
print "$file $line\n" if $fields[3] == 129;
}
}
output
splitn.txt Fa0/19 CUTExyz notconnect 129 half 100 10/100BaseTX
Your question is unclear. When you say:
What I need is to match the numbers in each line exactly
That could mean a couple of things. It could mean that each line contains nothing but a single number which you want to match. In that case, using == is probably better than using a regular expression. Or it could mean that you have lots of text on a line and you only want to match complete numbers. In that case you should use \b (the "word boundary" anchor) - /\b123\b/.
If you're clearer in your questions (perhaps by giving us sample input) then people won't have to guess at your meaning.
A few more points on your code:
Always include both use strict and use warnings.
Always check the return value from open() and take appropriate action on failure.
Use lexical filehandles and 3-arg version of open().
No need to quote $file in your open() call.
Using $_ can simplify your code.
/n on the match operator has no effect unless your regex contains parentheses.
Putting that all together (and assuming my second interpretation of your question is correct), your code could look like this:
#!/perl/bin/perl
use strict;
use warnings;
my @files = <c:/perl64/files/*>;
foreach my $file (@files) {
open my $file_h, '<', $file
or die "Can't open $file: $!";
while (<$file_h>) {
print "$file $_" if /\b123\b/;
}
# No need to close $file_h as it is closed
# automatically when the variable goes out
# of scope.
}

Defining Hash Values and Keys and Using Multiple Different Files

I am struggling with writing a Perl program for several tasks. I have tried really hard to review all errors since I am a beginner and want to understand my mistakes, but I am failing. Hopefully, my description of the tasks and my deficient program so far will not be confusing.
In my current directory, I have a variable number of “.txt” files. (I can have 4, 5, 8, or any number of files. However, I don’t think I will get more than 17 files.) The format of the “.txt” files is the same. There are six columns, which are separated by white space. I only care about two columns in these files: the second column, which is the coral reef regionID (made up of letters and numbers), and the fifth column, which is the p-value. The number of rows in each file is undetermined. What I need to do is find all the common regionIDs in all .txt files and print these common regions to an outfile. However, before printing, I must sort them.
The following is my program so far, but I have received error messages, which I have included after the program. Thus, my definitions of variables are the major problems. I really appreciate any suggestions for writing the program and thank you for your patience with a beginner like me.
UPDATE: I have declared the variables as suggested. After reviewing my program, two syntax errors appear.
syntax error at oreg.pl line 19, near "$hash{"
syntax error at oreg.pl line 23, near "}"
Execution of oreg.pl aborted due to compilation errors.
Here is an excerpt of the edited program that includes where said errors are.
#!/user/bin/perl
use strict;
use warnings;
# Trying to read files in @txtfiles for reading into hash
foreach my $file (@txtfiles) {
open(FH,"<$file") or die "Can't open $file\n";
while(chomp(my $line = <FH>)){
$line =~ s/^\s+//;
my @IDp = split(/\s+/, $line); # removing whitespace
my $i = 0;
# trying to define values and keys in terms of array elements in IDp
my $value = my $hash{$IDp[$i][1]};
$value .= "$IDp[$i][4]"; # confused here at format to append p-values
$i++;
}
}
close(FH);
These are past errors:
Global symbol "$file" requires explicit package name at oreg.pl line 13.
Global symbol "$line" requires explicit package name at oreg.pl line 16.
#[And many more just like that...]
Execution of oreg.pl aborted due to compilation errors.
You didn't declare $file.
foreach my $file (@txtfiles) {
You didn't declare $line.
while(chomp(my $line = <FH>)){
etc.
use strict;
use warnings;
my %region;
foreach my $file (@txtfiles) {
open my $FH, "<", $file or die "Can't open $file \n";
while (my $line = <$FH>) {
chomp($line);
my @values = split /\s+/, $line;
my $regionID = $values[1]; # 2nd column, per your notes
my $pvalue = $values[4]; # 5th column, per your notes
$region{$regionID} //= []; # Inits this value in the hash to an empty arrayref if undefined
push @{$region{$regionID}}, $pvalue;
}
}
# Now sort and print using %region as needed
At the end of this code, %region is a hash where the keys are the region IDs and the values are array references containing the various pvalues.
Here's a few snippets that may help you with next steps:
keys %region will give you a list of region ID values.
my @pvals = @{$region{SomeRegionID}} will give you the list of pvalues for SomeRegionID.
$region{SomeRegionID}->[0] will give you the first pvalue for that region.
You may want to check out Data::Printer or Data::Dumper - they are CPAN modules that will let you easily print out your data structure, which might help you understand what's going on in your code.
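To go from %region to the sorted list of region IDs common to all files, one option is to also record, per region, which files it was seen in, and then keep only the IDs seen in every file. A rough sketch of that last step, assuming @txtfiles holds your input file names (the output file name here is made up):
my %seen;    # region ID => { file name => 1 }; fill it inside the same while loop with:
# $seen{$regionID}{$file} = 1;

# After all files have been read, keep only the region IDs present in every file:
my @common = sort grep { keys %{ $seen{$_} } == @txtfiles } keys %seen;

open my $out, '>', 'common_regions.txt' or die "Can't open output: $!";
print {$out} "$_\n" for @common;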

A Perl script to process a CSV file, aggregating properties spread over multiple records

Sorry for the vague question, I'm struggling to think how to better word it!
I have a CSV file that looks a little like this, only a lot bigger:
550672,1
656372,1
766153,1
550672,2
656372,2
868194,2
766151,2
550672,3
868179,3
868194,3
550672,4
766153,4
The values in the first column are ID numbers and the second column could be described as a property (for want of a better word...). The ID number 550672 has properties 1,2,3,4. Can anyone point me towards how I can begin producing strings like that for all the ID numbers? My ideal output would be a new csv file which looks something like:
550672,1;2;3;4
656372,1;2
766153,1;4
etc.
I am very much a Perl baby (only 3 days old!) so would really appreciate direction rather than an outright solution, I'm determined to learn this stuff even if it takes me the rest of my days! I have tried to investigate it myself as best as I can, although I think I've been encumbered by not really knowing what to really search for. I am able to read in and parse CSV files (I even got so far as removing duplicate values!) but that is really where it drops off for me. Any help would be greatly appreciated!
I think it is best if I offer you a working program rather than a few hints. Hints can only take you so far, and if you take the time to understand this code it will give you a good learning experience
It is best to use Text::CSV whenever you are processing CSV data as all the debugging has already been done for you
use strict;
use warnings;
use Text::CSV;
my $csv = Text::CSV->new;
open my $fh, '<', 'data.txt' or die $!;
my %data;
while (my $line = <$fh>) {
$csv->parse($line) or die "Invalid data line";
my ($key, $val) = $csv->fields;
push @{ $data{$key} }, $val;
}
for my $id (sort keys %data) {
printf "%s,%s\n", $id, join ';', @{ $data{$id} };
}
output
550672,1;2;3;4
656372,1;2
766151,2
766153,1;4
868179,3
868194,2;3
Firstly props for seeking an approach not a solution.
As you've probably already found with perl, There Is More Than One Way To Do It.
The approach I would take would be;
use strict; # will save you big time in the long run
my %ids # Use a hash table with the id as the key to accumulate the properties
open a file handle on csv or die
while (read another line from the file handle){
split line into ID and property variable # google the split function
append new property to existing properties for this id in the hash table # If it doesn't exist already, it will be created
}
foreach my $key (keys %ids) {
deduplicate properties
print/display/do whatever you need to do with the result
}
This approach means you will need to iterate over the whole set twice (once in memory), so depending on the size of the dataset that may be a problem.
A more sophisticated approach would be to use a hashtable of hashtables to do the deduplication in the initial step, but depending on how quickly you want/need to get it working, that may not be worthwhile in the first instance.
Check out this question for a discussion on how to do the deduplication.
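If it helps to see the skeleton above made concrete, here is a minimal sketch of the hash-of-hashes variant that deduplicates as it reads. The file name and the comma-separated layout are assumptions:
use strict;
use warnings;

my %ids;    # id => { property => 1 }, so duplicate properties collapse automatically

open my $fh, '<', 'data.csv' or die "Can't open data.csv: $!";
while ( my $line = <$fh> ) {
    chomp $line;
    my ( $id, $property ) = split /,/, $line;
    $ids{$id}{$property} = 1;
}
close $fh;

for my $id ( sort keys %ids ) {
    print join( ',', $id, join( ';', sort keys %{ $ids{$id} } ) ), "\n";
}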
Well, open the file as stdin in perl, assume each row has two columns, then iterate over all lines, using the left column as the hash key and gathering the right column into an array referenced by that key. At the end of the input file you'll have a hash of arrays, so iterate over it, printing each hash key and its array elements separated by ";" or any other character you wish.
and here you go
dtpwmbp:~ pwadas$ cat input.txt
550672,1
656372,1
766153,1
550672,2
656372,2
868194,2
766151,2
550672,3
868179,3
868194,3
550672,4
766153,4
dtpwmbp:~ pwadas$ cat bb2.pl
#!/opt/local/bin/perl
my %hash;
while (<>)
{
chomp;
my($key, $value) = split /,/;
push @{$hash{$key}} , $value ;
}
foreach my $key (sort keys %hash)
{
print $key . "," . join(";", @{$hash{$key}} ) . "\n" ;
}
dtpwmbp:~ pwadas$ cat input.txt | perl -f bb2.pl
550672,1;2;3;4
656372,1;2
766151,2
766153,1;4
868179,3
868194,2;3
dtpwmbp:~ pwadas$
perl -F"," -ane 'chomp($F[1]);$X{$F[0]}=$X{$F[0]}.";".$F[1];if(eof){for(keys %X){$X{$_}=~s/;//;print $_.",".$X{$_}."\n"}}'
Another (not perl) way which incidentally is shorter and more elegant:
#!/opt/local/bin/gawk -f
BEGIN {FS=OFS=",";}
NF > 0 { IDs[$1]=IDs[$1] ";" $2; }
END { for (i in IDs) print i, substr(IDs[i], 2); }
The first line (after specifying the interpreter) sets the input FIELD SEPARATOR and the OUTPUT FIELD SEPARATOR to the comma. The second line checks whether we have more than zero fields and, if so, uses the ID ($1) as the key and appends $2 to its value. This is done for all lines.
The END statement will print these pairs out in an unspecified order. If you want to sort them, you have the option of the asorti gnu awk function or connecting the output of this snippet with a pipe to sort -t, -k1n,1n.

Finding an amino acid sequence in a file

I have a FASTA file of a protein sequence. I want to find if the
sequence hxxhcxc is present in the file or not, if yes, then print the
stretch. Here, h=hydrophobic, c=charged, x=any (including remaining) residue/s.
How to do this in Perl?
What I could think of is to make 3 arrays: hydrophobic, charged, and all residues.
Compare each array with the file containing the FASTA sequence. I can't think of anything beyond this, especially how to maintain the order, which is the main thing. I am a beginner in Perl, so please make the explanation as simple as possible.
PS: Since this is just one sequence, I can simply copy the content to a .txt file; there is no compulsion to use a FASTA file (in this case). Hydrophobic and charged are residues (amino acids); there are 9 hydrophobic and 5 charged residues. Each amino acid is written as an upper-case single letter, as you mentioned. So what I want to do is find a sequence: hydrophobic, any, any, hydrophobic, charged, any, charged (hxxhcxc), in that order, in the protein sequence (.txt file/fasta file). I struggled to reframe my question; I hope it's a little clearer now.
I'm not familiar with Fasta files, but regular expressions certainly seem like the way to go here.
In words
If you open the file for reading, you can process the file line by line, printing only those lines that match the regular expression you specified.
In code
use strict;
use warnings;
use autodie;
open my $fh, '<', 'file.fasta'; # Open filehandle in read mode
while ( my $line = <$fh> ) { # Loop over line by line
print $line # Print line if it matches pattern
if $line =~ /h..hc.c/; # '.' in a regular expression matches
# (almost) anything
}
close $fh; # Close filehandle
So, you'll have to decide which are the "hydrophobic" amino acids, but let's just start with either V(aline), I(soleucine), L(eucine), F, W, or C.
And the charged amino acids are E, D, R or K. Using this you can define a regex (you'll see it below).
If you just have the whole sequence in a text file parse it like this:
#!/usr/bin/perl
open(IN, "yourfile.txt") || die("couldn't open the file: $!");
$sequence = "";
while(<IN>) {
chomp();
$sequence .= $_;
}
if($sequence =~ /[VILFWC]..[VILFWC][EDRK].[EDRK]/) {
print "Found it!\n";
} else {
print "Not there\n";
}
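Since you also want to print the stretch itself, not just report that it is there, you can wrap the pattern in capturing parentheses and print $1. For example, the if/else above could become:
if($sequence =~ /([VILFWC]..[VILFWC][EDRK].[EDRK])/) {
    print "Found it: $1\n";    # $1 holds the matched hxxhcxc stretch
} else {
    print "Not there\n";
}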

How can I combine files into one CSV file?

If I have one file FOO_1.txt that contains:
FOOA
FOOB
FOOC
FOOD
...
and lots of other files FOO_files.txt. Each of them contains:
1110000000...
one line that contains a 0 or 1 for each of the FOO_1 values (fooa, foob, ...)
Now I want to combine them to one file FOO_RES.csv that will have the following format:
FOOA,1,0,0,0,0,0,0...
FOOB,1,0,0,0,0,0,0...
FOOC,1,0,0,0,1,0,0...
FOOD,0,0,0,0,0,0,0...
...
What is the simple & elegant way to conduct that
(with hash & arrays -> $hash{$key} = \@data ) ?
Thanks a lot for any help !
Yohad
If you can't describe your data and your desired result clearly, there is no way that you will be able to code it. Taking on a simple project is a good way to get started using a new language.
Allow me to present a simple method you can use to churn out code in any language, whether you know it or not. This method only works for smallish projects. You'll need to actually plan ahead for larger projects.
How to write a program:
Open up your text editor and write down what data you have. Make each line a comment
Describe your desired results.
Start describing the steps needed to change your data into the desired form.
Numbers 1 & 2 completed:
#!/usr/bin/perl
use strict;
use warnings;
# Read data from multiple files and combine it into one file.
# Source files:
# Field definitions: has a list of field names, one per line.
# Data files:
# * Each data file has a string of digits.
# * There is a one-to-one relationship between the digits in the data file and the fields in the field defs file.
#
# Results File:
# * The results file is a CSV file.
# * Each field will have one row in the CSV file.
# * The first column will contain the name of the field represented by the row.
# * Subsequent values in the row will be derived from the data files.
# * The order of subsequent fields will be based on the order files are read.
# * However, each column (2-X) must represent the data from one data file.
Now that you know what you have, and where you need to go, you can flesh out what the program needs to do to get you there - this is step 3:
You know you need to have the list of fields, so get that first:
# Get a list of fields.
# Read the field definitions file into an array.
Since it is easiest to write CSV in a row oriented fashion, you will need to process all your files before generating each row. So you'll need someplace to store the data.
# Create a variable to store the data structure.
Now we read the data files:
# Get a list of data files to parse
# Iterate over list
# For each data file:
# Read the string of digits.
# Assign each digit to its field.
# Store data for later use.
We've got all the data in memory, now write the output:
# Write the CSV file.
# Open a file handle.
# Iterate over list of fields
# For each field
# Get field name and list of values.
# Create a string - comma separated string with field name and values
# Write string to file handle
# close file handle.
Now you can start converting comments into code. You could have anywhere from 1 to 100 lines of code for each comment. You may find that something you need to do is very complex and you don't want to take it on at the moment. Make a dummy subroutine to handle the complex task, and ignore it until you have everything else done. Now you can solve that complex, thorny sub-problem on its own.
Since you are just learning Perl, you'll need to hit the docs to find out how to do each of the subtasks represented by the comments you've written. The best resource for this kind of work is the list of functions by category in perlfunc. The Perl syntax guide will come in handy too. Since you'll need to work with a complex data structure, you'll also want to read from the Data Structures Cookbook.
You may be wondering how the heck you should know which perldoc pages you should be reading for a given problem. An article on Perlmonks titled How to RTFM provides a nice introduction to the documentation and how to use it.
The great thing is that if you get stuck, you have some code to share when you ask for help.
If I understand correctly your first file is your key order file, and the remaining files each contain a byte per key in the same order. You want a composite file of those keys with each of their data bytes listed together.
In this case you should open all the files simultaneously. Read one key from the key order file, read one byte from each of the data files. Output everything as you read it to your final file. Repeat for each key.
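A rough sketch of that simultaneous-read idea. The file names and glob pattern are made up, and it assumes each data file holds a single string of digits in the same order as the keys in FOO_1.txt:
use strict;
use warnings;

# Every FOO_*.txt file except the key file is treated as a data file.
my @data_files = grep { $_ ne 'FOO_1.txt' } glob 'FOO_*.txt';

open my $keys_fh, '<', 'FOO_1.txt'   or die "Can't open FOO_1.txt: $!";
open my $out_fh,  '>', 'FOO_RES.csv' or die "Can't open FOO_RES.csv: $!";

# Open every data file once, up front.
my @data_fhs;
for my $file (@data_files) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    push @data_fhs, $fh;
}

# For each key, read the next single digit from every data file in turn.
while ( my $key = <$keys_fh> ) {
    chomp $key;
    my @bits = map { read( $_, my $c, 1 ) ? $c : '' } @data_fhs;
    print {$out_fh} join( ',', $key, @bits ), "\n";
}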
It looks like you have many foo_files that have 1 line in them, something like:
1110000000
Which stands for
fooa=1
foob=1
fooc=1
food=0
fooe=0
foof=0
foog=0
fooh=0
fooi=0
fooj=0
And it looks like your foo_res is just a summation of those values? In that case, you don't need a hash of arrays, but just a hash.
my @foo_files = (); # NOT SURE HOW YOU POPULATE THIS ONE
my @foo_keys = qw(a b c d e f g h i j);
my %foo_hash = map{ ( $_, 0 ) } @foo_keys; # initialize hash
foreach my $foo_file ( @foo_files ) {
open( my $FOO, "<", $foo_file) || die "Cannot open $foo_file\n";
my $line = <$FOO>;
close( $FOO );
chomp($line);
my @foo_values = split(//, $line);
foreach my $indx ( 0 .. $#foo_keys ) {
last if ( ! defined $foo_values[ $indx ] ); # or some kind of error checking if the input file doesn't have all the values
$foo_hash{ $foo_keys[$indx] } += $foo_values[ $indx ];
}
}
It's pretty hard to understand what you are asking for, but maybe this helps?
Your specifications aren't clear. You couldn't have "lots of other files" named FOO_files.txt, because that's only one name. So I'm going to take this as the files-with-data + filelist pattern. In this case, there are files named FOO*.txt, each containing "[01]+\n".
Thus the idea is to process all the files in the filelist file and to insert them all into a result file FOO_RES.csv, comma-delimited.
use strict;
use warnings;
use English qw<$OS_ERROR>;
use IO::Handle;
open my $foos, '<', 'FOO_1.txt'
or die "I'm dead: $OS_ERROR";
@ARGV = sort map { chomp; "$_.txt" } <$foos>;
$foos->close;
open my $foo_csv, '>', 'FOO_RES.csv'
or die "I'm dead: $OS_ERROR";
while ( my $line = <> ) {
my ( $foo_name ) = ( $ARGV =~ /(.*)\.txt$/ );
$foo_csv->print( join( ',', $foo_name, split //, $line ), "\n" );
}
$foo_csv->close;
You don't really need to use a hash. My Perl is a little rusty, so syntax may be off a bit, but basically do this:
open KEYFILE , "foo_1.txt" or die "cannot open foo_1 for writing";
open VALFILE , "foo_files.txt" or die "cannot open foo_files for writing";
open OUTFILE , ">foo_out.txt"or die "cannot open foo_out for writing";
my %output;
while (<KEYFILE>) {
my $key = $_;
my $val = <VALFILE>;
my $arrVal = split(//,$val);
$output{$key} = $arrVal;
print OUTFILE $key."," . join(",", $arrVal)
}
Edit: Syntax check OK
Comment by Sinan: @Byron, it really bothers me that your first sentence says the OP does not need a hash yet your code has %output which seems to serve no purpose. For reference, the following is a less verbose way of doing the same thing.
#!/usr/bin/perl
use strict;
use warnings;
use autodie qw(:file :io);
open my $KEYFILE, '<', "foo_1.txt";
open my $VALFILE, '<', "foo_files.txt";
open my $OUTFILE, '>', "foo_out.txt";
while (my $key = <$KEYFILE>) {
chomp $key;
print $OUTFILE join(q{,}, $key, split //, <$VALFILE> ), "\n";
}
__END__