Split a file between two specific values and put it into an array in Perl

Sample file:
### Current GPS Coordinates ###
Just In ... : unknown
### Power Source ###
2
### TPM Status ###
0
### Boot Version ###
GD35 1.1.0.12 - built 14:22:56, Jul 10 232323
I want to split the above file into arrays like the one below:
@Current_GPS_Coordinates should be like below
### Current GPS Coordinates ###
Just In ... : unknown
I'd like to do it in Perl; any help? (current program added from a comment)
#!/usr/local/lib/perl/5.14.2 -w
my @lines;
my $file;
my $lines;
my $i;
#chomp $i;
$file = 'test.txt';       # Name the file
open(INFO, $file);        # Open the file
@lines = <INFO>;          # Read it into an array
close(INFO);              # Close the file
foreach my $line (@lines) { print "$line"; }

Read the file line by line, using chomp to remove trailing newlines.
If the input line matches the regexp /^### (.*) ###/, then you have the name of an "array" in $1.
It is possible to make named variables like @Current_GPS_Coordinates from these matches, but it's better to use them as hash keys and store the data in a hash that has arrays as its values.
So put the $1 from the match into $lastmatch and start an empty array referred to by the hash, like this: $items{$lastmatch} = []. Then read some more.
If the input line does not match the "name of an array" regexp given above, and it is not empty, then we assume it is a value for the last match found, so it can be pushed onto the current array like this: push @{ $items{$lastmatch} }, $line.
Once you've done this, all the data will be available in the %items hash; a short sketch putting these steps together follows below.
See the perldata, perlre, perldsc and perllol documentation pages for more details.
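Putting those steps together, a minimal sketch might look like this (the file name is taken from the question; %items and $lastmatch are the names used above):
#!/usr/bin/perl
use strict;
use warnings;

my %items;
my $lastmatch;

open my $fh, '<', 'test.txt' or die "test.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    if ($line =~ /^### (.*) ###/) {
        $lastmatch = $1;             # e.g. "Current GPS Coordinates"
        $items{$lastmatch} = [];     # start an empty array for this section
    }
    elsif ($line ne '' and defined $lastmatch) {
        push @{ $items{$lastmatch} }, $line;   # value for the current section
    }
}
close $fh;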

A good place to start would be buying the book Learning Perl (O'Reilly). Seriously, it's a great book with interesting exercises at the end of each chapter. Very easy to learn.
1). Why do you have "my @lines" and then "my $lines" lower down? Those are two separate variables: the sigil makes @lines an array and $lines an unrelated scalar, and reusing the same name for both invites confusion. Note that $lines does not give you the number of items in @lines; for that you would evaluate the array in scalar context, e.g. scalar @lines.
2). What is "my $i"? Even if you're just writing down thoughts, try to use descriptive names. It'll make the code a lot easier to piece together.
3). Why is there a commented out "chomp $i"? Where were you going with that thought?
4). Try to use the 3-argument form of open. This will ensure you don't accidentally destroy files you're reading from (and it's good practice to check the result):
open INFO, "<", $file or die "Can't open $file: $!";
If you're not sure where to start with this problem, Vorsprung's answer probably won't mean much yet. Regexes and variables like $1 are things you'll need to read a book to understand.
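For reference, here is one way the posted script might look after applying these points (a sketch that, like the original, just reads the file into an array and prints it):
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'test.txt';                              # name the file
open my $info, '<', $file or die "Can't open $file: $!";
my @lines = <$info>;                                # read it into an array
close $info;

foreach my $line (@lines) { print $line; }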

Related

Perl, find a match and read next line in perl

I would like to use
myscript.pl targetfolder/*
to read some numbers from ASCII files.
myscript.pl
@list = <@ARGV>;
# Is the whole file or only 1st line is loaded?
foreach $file ( @list ) {
    open (F, $file);
}
# is this correct to judge if there is still file to load?
while ( <F> ) {
    match_replace()
}
sub match_replace {
    # if I want to read the 5th line in downward, how to do that?
    # if I would like to read multi lines in multi array[row],
    # how to do that?
    if ( /^\sName\s+/ ) {
        $name = $1;
    }
}
I would recommend a thorough read of perlintro - it will give you a lot of the information you need. Additional comments:
Always use strict and warnings. The first will enforce some good coding practices (like for example declaring variables), the second will inform you about potential mistakes. For example, one warning produced by the code you showed would be readline() on unopened filehandle F, giving you the hint that F is not open at that point (more on that below).
@list = <@ARGV>;: This is a bit tricky, I wouldn't recommend it - you're essentially using glob, and expanding targetfolder/* is something your shell should be doing, and if you're on Windows, I'd recommend Win32::Autoglob instead of doing it manually.
foreach ... { open ... }: You're not doing anything with the files once you've opened them - the loop to read from the files needs to be inside the foreach.
"Is the whole file or only 1st line is loaded?" open doesn't read anything from the file, it just opens it and provides a filehandle (which you've named F) that you then need to read from.
I'd strongly recommend you use the more modern three-argument form of open and check it for errors, as well as use lexical filehandles since their scope is not global, as in open my $fh, '<', $file or die "$file: $!";.
"is this correct to judge if there is still file to load?" Yes, while (<$filehandle>) is a good way to read a file line-by-line, and the loop will end when everything has been read from the file. You may want to use the more explicit form while (my $line = <$filehandle>), so that your variable has a name, instead of the default $_ variable - it does make the code a bit more verbose, but if you're just starting out that may be a good thing.
match_replace(): You're not passing any parameters to the sub. Even though this code might still "work", it's passing the current line to the sub through the global $_ variable, which is not a good practice because it will be confusing and error-prone once the script starts getting longer.
if (/^\sName\s+/){$name = $1;}: Since you've named the sub match_replace, I'm guessing you want to do a search-and-replace operation. In Perl, that's called s/search/replacement/, and you can read about it in perlrequick and perlretut. As for the code you've shown, you're using $1, but you don't have any "capture groups" ((...)) in your regular expression - you can read about that in those two links as well.
"if I want to read the 5th line in downward , how to do that ?" As always in Perl, There Is More Than One Way To Do It (TIMTOWTDI). One way is with the range operator .. - you can skip the first through fourth lines by saying next if 1..4; at the beginning of the while loop, this will test those line numbers against the special $. variable that keeps track of the most recently read line number.
"and if I would like to read multi lines in multi array[row], how to do that ?" One way is to use push to add the current line to the end of an array. Since keeping the lines of a file in an array can use up more memory, especially with large files, I'd strongly recommend making sure you think through the algorithm you want to use here. You haven't explained why you would want to keep things in an array, so I can't be more specific here.
So, having said all that, here's how I might have written that code. I've added some debugging code using Data::Dumper - it's always helpful to see the data that your script is working with.
#!/usr/bin/env perl
use warnings;
use strict;
use Data::Dumper; # for debugging
$Data::Dumper::Useqq = 1;

for my $file (@ARGV) {
    print Dumper($file); # debug
    open my $fh, '<', $file or die "$file: $!";
    while (my $line = <$fh>) {
        next if 1..4;
        chomp($line); # remove line ending
        match_replace($line);
    }
    close $fh;
}

sub match_replace {
    my ($line) = @_; # get argument(s) to sub
    my $name;
    if ( $line =~ /^\sName\s+(.*)$/ ) {
        $name = $1;
    }
    print Data::Dumper->Dump([$line,$name],['line','name']); # debug
    # ... do more here ...
}
The above code is explicitly looping over @ARGV and opening each file, and I did say above that more verbose code can be helpful in understanding what's going on. I just wanted to point out a nice feature of Perl, the "magic" <> operator (discussed in perlop under "I/O Operators"), which will automatically open the files in @ARGV and read lines from them. (There's just one small thing: if I want to use the $. variable and have it count the lines per file, I need to use the continue block I've shown below; this is explained in eof.) This would be a more "idiomatic" way of writing that first loop:
while (<>) { # reads line into $_
    next if 1..4;
    chomp; # automatically uses $_ variable
    match_replace($_);
} continue { close ARGV if eof } # needed for $. (and range operator)

Defining Hash Values and Keys and Using Multiple Different Files

I am struggling with writing a Perl program for several tasks. I have tried really hard to review all errors since I am a beginner and want to understand my mistakes, but I am failing. Hopefully, my description of the tasks and my deficient program so far will not be confusing.
In my current directory, I have a variable number of “.txt” files. (I can have 4, 5, 8, or any number of files. However, I don’t think I will get more than 17 files.) The format of the “.txt” files is the same. There are six columns, which are separated by white space. I only care about two columns in these files: the second column, which is the coral reef regionID (made up of letters and numbers), and the fifth column, which is the p-value. The number of rows in each file is undetermined. What I need to do is find all the common regionIDs in all .txt files and print these common regions to an outfile. However, before printing, I must sort them.
The following is my program so far, but I have received error messages, which I have included after the program. Thus, my definitions of variables are the major problems. I really appreciate any suggestions for writing the program and thank you for your patience with a beginner like me.
UPDATE: I have declared the variables as suggested. After reviewing my program, two syntax errors appear.
syntax error at oreg.pl line 19, near "$hash{"
syntax error at oreg.pl line 23, near "}"
Execution of oreg.pl aborted due to compilation errors.
Here is an excerpt of the edited program that includes where said errors are.
#!/user/bin/perl
use strict;
use warnings;

# Trying to read files in @txtfiles for reading into hash
foreach my $file (@txtfiles) {
    open(FH,"<$file") or die "Can't open $file\n";
    while(chomp(my $line = <FH>)){
        $line =~ s/^\s+//;
        my @IDp = split(/\s+/, $line); # removing whitespace
        my $i = 0;
        # trying to define values and keys in terms of array elements in IDp
        my $value = my $hash{$IDp[$i][1]};
        $value .= "$IDp[$i][4]"; # confused here at format to append p-values
        $i++;
    }
}
close(FH);
close(FH);
These are past errors:
Global symbol "$file" requires explicit package name at oreg.pl line 13.
Global symbol "$line" requires explicit package name at oreg.pl line 16.
#[And many more just like that...]
Execution of oreg.pl aborted due to compilation errors.
You didn't declare $file.
foreach my $file (@txtfiles) {
You didn't declare $line.
while(chomp(my $line = <FH>)){
etc.
use strict;
use warnings;

my %region;

foreach my $file (@txtfiles) {
    open my $FH, "<", $file or die "Can't open $file \n";
    while (my $line = <$FH>) {
        chomp($line);
        my @values = split /\s+/, $line;
        my $regionID = $values[1]; # 2nd column, per your notes
        my $pvalue   = $values[4]; # 5th column, per your notes
        $region{$regionID} //= []; # inits this value in the hash to an empty arrayref if undefined
        push @{$region{$regionID}}, $pvalue;
    }
}
# Now sort and print using %region as needed
At the end of this code, %region is a hash where the keys are the region IDs and the values are array references containing the various p-values.
Here are a few snippets that may help you with the next steps:
keys %region will give you a list of region ID values.
my @pvals = @{$region{SomeRegionID}} will give you the list of p-values for SomeRegionID.
$region{SomeRegionID}->[0] will give you the first p-value for that region.
You may want to check out Data::Printer or Data::Dumper - they are CPAN modules that will let you easily print out your data structure, which might help you understand what's going on in your code.
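Since the original task was to keep only the region IDs common to all the .txt files, sorted, here is a minimal sketch of that final step. It assumes %region was built as above, that @txtfiles holds the file names, and that each region ID appears at most once per file (the output file name is just illustrative):
my $file_count = scalar @txtfiles;

open my $out, '>', 'common_regions.txt' or die "common_regions.txt: $!";
for my $regionID (sort keys %region) {
    # a region seen in every file has collected one p-value per file
    if (@{ $region{$regionID} } == $file_count) {
        print $out join("\t", $regionID, @{ $region{$regionID} }), "\n";
    }
}
close $out;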

A Perl script to process a CSV file, aggregating properties spread over multiple records

Sorry for the vague question, I'm struggling to think how to better word it!
I have a CSV file that looks a little like this, only a lot bigger:
550672,1
656372,1
766153,1
550672,2
656372,2
868194,2
766151,2
550672,3
868179,3
868194,3
550672,4
766153,4
The values in the first column are ID numbers and the second column could be described as a property (for want of a better word...). The ID number 550672 has properties 1, 2, 3, 4. Can anyone point me towards how I can begin working out how to produce strings like that for all the ID numbers? My ideal output would be a new CSV file which looks something like:
550672,1;2;3;4
656372,1;2
766153,1;4
etc.
I am very much a Perl baby (only 3 days old!) so would really appreciate direction rather than an outright solution, I'm determined to learn this stuff even if it takes me the rest of my days! I have tried to investigate it myself as best as I can, although I think I've been encumbered by not really knowing what to really search for. I am able to read in and parse CSV files (I even got so far as removing duplicate values!) but that is really where it drops off for me. Any help would be greatly appreciated!
I think it is best if I offer you a working program rather than a few hints. Hints can only take you so far, and if you take the time to understand this code it will give you a good learning experience.
It is best to use Text::CSV whenever you are processing CSV data, as all the debugging has already been done for you.
use strict;
use warnings;
use Text::CSV;
my $csv = Text::CSV->new;

open my $fh, '<', 'data.txt' or die $!;

my %data;
while (my $line = <$fh>) {
    $csv->parse($line) or die "Invalid data line";
    my ($key, $val) = $csv->fields;
    push @{ $data{$key} }, $val;
}

for my $id (sort keys %data) {
    printf "%s,%s\n", $id, join ';', @{ $data{$id} };
}
output
550672,1;2;3;4
656372,1;2
766151,2
766153,1;4
868179,3
868194,2;3
Firstly, props for seeking an approach, not a solution.
As you've probably already found with Perl, There Is More Than One Way To Do It.
The approach I would take would be:
use strict;  # will save you big time in the long run
my %ids;     # use a hash table with the id as the key to accumulate the properties

open a file handle on the csv or die
while (read another line from the file handle) {
    split the line into ID and property variables   # google the split function
    append the new property to this id's existing properties in the hash table  # if it doesn't exist already, it will be created
}
foreach my $key (keys %ids) {
    deduplicate properties
    print/display/do whatever you need to do with the result
}
This approach means you will need to iterate over the whole set twice (once in memory), so depending on the size of the dataset that may be a problem.
A more sophisticated approach would be to use a hash of hashes to do the deduplication in the initial step (see the sketch below), but depending on how quickly you want/need to get it working, that may not be worthwhile in the first instance.
Check out this question for a discussion on how to do the deduplication.
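A minimal sketch of that hash-of-hashes idea (assuming $fh is a filehandle already opened on the CSV): the inner hash keys collapse duplicates as they are read, so no second deduplication pass is needed:
my %ids;
while (my $line = <$fh>) {
    chomp $line;
    my ($id, $prop) = split /,/, $line;
    $ids{$id}{$prop} = 1;    # a repeated property just overwrites the same key
}
for my $id (sort keys %ids) {
    print join(',', $id, join(';', sort keys %{ $ids{$id} })), "\n";
}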
Well, read the file from stdin in Perl, assume each row has two columns, then iterate over all lines, using the left column as the hash key and gathering the right column into an array referenced by that hash key. At the end of the input file you'll have a hash of arrays, so iterate over it, printing each hash key and the assigned array elements separated by ";" or any other sign you wish.
and here you go
dtpwmbp:~ pwadas$ cat input.txt
550672,1
656372,1
766153,1
550672,2
656372,2
868194,2
766151,2
550672,3
868179,3
868194,3
550672,4
766153,4
dtpwmbp:~ pwadas$ cat bb2.pl
#!/opt/local/bin/perl
my %hash;
while (<>)
{
    chomp;
    my ($key, $value) = split /,/;
    push @{$hash{$key}}, $value;
}
foreach my $key (sort keys %hash)
{
    print $key . "," . join(";", @{$hash{$key}}) . "\n";
}
dtpwmbp:~ pwadas$ cat input.txt | perl -f bb2.pl
550672,1;2;3;4
656372,1;2
766151,2
766153,1;4
868179,3
868194,2;3
dtpwmbp:~ pwadas$
perl -F"," -ane 'chomp($F[1]);$X{$F[0]}=$X{$F[0]}.";".$F[1];if(eof){for(keys %X){$X{$_}=~s/;//;print $_.",".$X{$_}."\n"}}'
Another (not Perl) way, which incidentally is shorter and more elegant:
#!/opt/local/bin/gawk -f
BEGIN { FS = OFS = ","; }
NF > 0 { IDs[$1] = IDs[$1] ";" $2; }
END    { for (i in IDs) print i, substr(IDs[i], 2); }
The first line (after specifying the interpreter) sets the input field separator (FS) and the output field separator (OFS) to the comma. The second line checks whether we have more than zero fields, and if so appends $2 (preceded by a semicolon) to the entry keyed by the ID number ($1). This is done for all lines.
The END block will print these pairs out in an unspecified order. If you want to sort them, you have the option of GNU awk's asorti function, or connecting the output of this snippet with a pipe to sort -t, -k1n,1n.

How can I combine files into one CSV file?

If I have one file FOO_1.txt that contains:
FOOA
FOOB
FOOC
FOOD
...
and lots of other files FOO_files.txt. Each of them contains:
1110000000...
one line that contains a 0 or 1 for each of the FOO_1 values (fooa, foob, ...)
Now I want to combine them to one file FOO_RES.csv that will have the following format:
FOOA,1,0,0,0,0,0,0...
FOOB,1,0,0,0,0,0,0...
FOOC,1,0,0,0,1,0,0...
FOOD,0,0,0,0,0,0,0...
...
What is a simple & elegant way to do that
(with hash & arrays -> $hash{$key} = \@data)?
Thanks a lot for any help!
Yohad
If you can't describe your data and your desired result clearly, there is no way that you will be able to code it. Taking on a simple project like this is a good way to get started using a new language.
Allow me to present a simple method you can use to churn out code in any language, whether you know it or not. This method only works for smallish projects. You'll need to actually plan ahead for larger projects.
How to write a program:
1. Open up your text editor and write down what data you have. Make each line a comment.
2. Describe your desired results.
3. Start describing the steps needed to change your data into the desired form.
Numbers 1 & 2 completed:
#!/usr/bin/perl
use strict;
use warnings;
# Read data from multiple files and combine it into one file.
# Source files:
# Field definitions: has a list of field names, one per line.
# Data files:
# * Each data file has a string of digits.
# * There is a one-to-one relationship between the digits in the data file and the fields in the field defs file.
#
# Results File:
# * The results file is a CSV file.
# * Each field will have one row in the CSV file.
# * The first column will contain the name of the field represented by the row.
# * Subsequent values in the row will be derived from the data files.
# * The order of subsequent fields will be based on the order files are read.
# * However, each column (2-X) must represent the data from one data file.
Now that you know what you have, and where you need to go, you can flesh out what the program needs to do to get you there - this is step 3:
You know you need to have the list of fields, so get that first:
# Get a list of fields.
# Read the field definitions file into an array.
Since it is easiest to write CSV in a row oriented fashion, you will need to process all your files before generating each row. So you'll need someplace to store the data.
# Create a variable to store the data structure.
Now we read the data files:
# Get a list of data files to parse
# Iterate over list
# For each data file:
# Read the string of digits.
# Assign each digit to its field.
# Store data for later use.
We've got all the data in memory, now write the output:
# Write the CSV file.
# Open a file handle.
# Iterate over list of fields
# For each field
# Get field name and list of values.
# Create a string - comma separated string with field name and values
# Write string to file handle
# close file handle.
Now you can start converting comments into code. You could have anywhere from 1 to 100 lines of code for each comment. You may find that something you need to do is very complex and you don't want to take it on at the moment. Make a dummy subroutine to handle the complex task, and ignore it until you have everything else done. Now you can solve that complex, thorny sub-problem on its own.
Since you are just learning Perl, you'll need to hit the docs to find out how to do each of the subtasks represented by the comments you've written. The best resource for this kind of work is the list of functions by category in perlfunc. The Perl syntax guide will come in handy too. Since you'll need to work with a complex data structure, you'll also want to read from the Data Structures Cookbook.
You may be wondering how the heck you should know which perldoc pages you should be reading for a given problem. An article on Perlmonks titled How to RTFM provides a nice introduction to the documentation and how to use it.
The great thing, is if you get stuck, you have some code to share when you ask for help.
If I understand correctly, your first file is your key order file, and the remaining files each contain one byte per key, in the same order. You want a composite file of those keys with each of their data bytes listed together.
In this case you should open all the files simultaneously: read one key from the key order file and one byte from each of the data files, outputting everything to your final file as you read it. Repeat for each key. A sketch of this follows below.
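A minimal sketch of that approach; the data-file glob pattern is a hypothetical stand-in, since the question doesn't give the real file names:
use strict;
use warnings;

open my $keys, '<', 'FOO_1.txt' or die "FOO_1.txt: $!";

# open every data file up front (the pattern here is hypothetical)
my @data_fhs = map {
    open my $fh, '<', $_ or die "$_: $!";
    $fh;
} glob 'FOO_data_*.txt';

open my $out, '>', 'FOO_RES.csv' or die "FOO_RES.csv: $!";
while (my $key = <$keys>) {
    chomp $key;
    # read one digit from each data file, in file order
    my @bits = map { getc($_) // '' } @data_fhs;
    print $out join(',', $key, @bits), "\n";
}
close $out;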
It looks like you have many foo_files that have 1 line in them, something like:
1110000000
Which stands for
fooa=1
foob=1
fooc=1
food=0
fooe=0
foof=0
foog=0
fooh=0
fooi=0
fooj=0
And it looks like your foo_res is just a summation of those values? In that case, you don't need a hash of arrays, but just a hash.
my @foo_files = (); # NOT SURE HOW YOU POPULATE THIS ONE
my @foo_keys = qw(a b c d e f g h i j);
my %foo_hash = map { ( $_, 0 ) } @foo_keys; # initialize hash

foreach my $foo_file ( @foo_files ) {
    open( my $FOO, "<", $foo_file ) || die "Cannot open $foo_file\n";
    my $line = <$FOO>;
    close( $FOO );
    chomp($line);
    my @foo_values = split(//, $line);
    foreach my $indx ( 0 .. $#foo_keys ) {
        # "defined" so that a legitimate 0 digit doesn't end the loop early;
        # add error checking here if the input file doesn't have all the values
        last if ( ! defined $foo_values[ $indx ] );
        $foo_hash{ $foo_keys[$indx] } += $foo_values[ $indx ];
    }
}
It's pretty hard to understand what you are asking for, but maybe this helps?
Your specifications aren't clear. You couldn't have "lots of other files" named FOO_files.txt, because that's only one name. So I'm going to take this as the files-with-data + filelist pattern. In this case, there are files named FOO*.txt, each containing "[01]+\n".
Thus the idea is to process all the files in the filelist file and to insert them all into a result file FOO_RES.csv, comma-delimited.
use strict;
use warnings;
use English qw<$OS_ERROR>;
use IO::Handle;

open my $foos, '<', 'FOO_1.txt'
    or die "I'm dead: $OS_ERROR";
@ARGV = sort map { chomp; "$_.txt" } <$foos>;
$foos->close;

open my $foo_csv, '>', 'FOO_RES.csv'
    or die "I'm dead: $OS_ERROR";

while ( my $line = <> ) {
    chomp $line;   # so the last digit doesn't carry a newline
    my ( $foo_name ) = ( $ARGV =~ /(.*)\.txt$/ );
    $foo_csv->print( join( ',', $foo_name, split //, $line ), "\n" );
}
$foo_csv->close;
You don't really need to use a hash. My Perl is a little rusty, so syntax may be off a bit, but basically do this:
open KEYFILE, "foo_1.txt" or die "cannot open foo_1 for reading";
open VALFILE, "foo_files.txt" or die "cannot open foo_files for reading";
open OUTFILE, ">foo_out.txt" or die "cannot open foo_out for writing";

my %output;
while (<KEYFILE>) {
    chomp( my $key = $_ );
    chomp( my $val = <VALFILE> );
    my @arrVal = split(//, $val);
    $output{$key} = \@arrVal;
    print OUTFILE $key . "," . join(",", @arrVal) . "\n";
}
Edit: Syntax check OK
Comment by Sinan: @Byron, it really bothers me that your first sentence says the OP does not need a hash, yet your code has %output, which seems to serve no purpose. For reference, the following is a less verbose way of doing the same thing.
#!/usr/bin/perl
use strict;
use warnings;
use autodie qw(:file :io);

open my $KEYFILE, '<', "foo_1.txt";
open my $VALFILE, '<', "foo_files.txt";
open my $OUTFILE, '>', "foo_out.txt";

while (my $key = <$KEYFILE>) {
    chomp $key;
    chomp( my $val = <$VALFILE> );
    print $OUTFILE join(q{,}, $key, split(//, $val)), "\n";
}
__END__

Searching/reading another file from awk based on current file's contents, is it possible?

I'm processing a huge file with (GNU) awk (other available tools: Linux shell tools, some old (>5.0) version of Perl, but I can't install modules).
My problem: if some field1, field2, field3 contain X, Y, Z, I must search for a file in another directory which contains field4 and field5 on one line, and insert some data from the found file into the current output.
E.g.:
Actual file line:
f1 f2 f3 f4 f5
X Y Z A B
Now I need to search for another file (in another directory), which contains e.g.
f1 f2 f3 f4
A U B W
And write to STDOUT $0 from the original file, and f2 and f3 from the found file, then process the next line of the original file.
Is it possible to do it with awk?
Let me start out by saying that your problem description isn't really that helpful. Next time, please just be more specific: You might be missing out on much better solutions.
So from your description, I understand you have two files which contain whitespace-separated data. In the first file, you want to match the first three columns against some search pattern. If found, you want to find all lines in another file which contain the fourth and fifth column of the matching line in the first file. From those lines, you need to extract the second and third column and then print the first column of the first file and the second and third from the second file. Okay, here goes:
#!/usr/bin/env perl -nwa
use strict;
use File::Find 'find';

my @search = qw(X Y Z);

# if you know in advance that the otherfile isn't
# huge, you can cache it in memory as an optimization.

# with any more columns, you want a loop here:
if ($F[0] eq $search[0]
    and $F[1] eq $search[1]
    and $F[2] eq $search[2])
{
    my @files;
    find(sub {
        return if not -f $_;
        # verbatim search for the columns in the file name.
        # I'm still not sure what your file-search criteria are, though.
        push @files, $File::Find::name if /\Q$F[3]\E/ and /\Q$F[4]\E/;
        # alternatively search for the combination:
        #push @files, $File::Find::name if /\Q$F[3]\E.*\Q$F[4]\E/;
        # or search *all* files in the search path?
        #push @files, $File::Find::name;
    }, '/search/path');

    foreach my $file (@files) {
        open my $fh, '<', $file or die "Can't open file '$file': $!";
        while (defined($_ = <$fh>)) {
            chomp;
            # order of fields doesn't matter per your requirement.
            my @cols = split ' ', $_;
            my %seen = map { ($_ => 1) } @cols;
            if ($seen{$F[3]} and $seen{$F[4]}) {
                print join(' ', $F[0], @cols[1,2]), "\n";
            }
        }
        close $fh;
    }
} # end if matching line
Unlike another poster's solution which contains lots of system calls, this doesn't fall back to the shell at all and thus should be plenty fast.
This is the type of work that got me to move from awk to perl in the first place. If you are going to accomplish this, you may actually find it easier to create a shell script that creates awk script(s) to query and then update in separate steps.
(I've written such a beast for reading/updating windows-ini-style files - it's ugly. I wish I could have used perl.)
I often see the restriction "I can't use any Perl modules", and when it's not a homework question, it's often just due to a lack of information. The article "Yes, even you can use CPAN" contains instructions on how to install CPAN modules locally without having root privileges. Another alternative is just to take the source code of a CPAN module and paste it into your program.
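As one illustration of a local install (a sketch; exact steps vary with your CPAN client and Perl version, and the module name is just an example):
# in the CPAN shell (started with: perl -MCPAN -e shell):
#   o conf makepl_arg INSTALL_BASE=~/perl5
#   o conf commit
#   install Text::CSV
# then point your scripts at the local library path:
use lib "$ENV{HOME}/perl5/lib/perl5";
use Text::CSV;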
None of this helps if there are other, unstated, restrictions, like lack of disk space that prevent installation of (too many) additional files.
This seems to work for some test files I set up matching your examples. Involving perl in this manner (interposed with grep) is probably going to hurt the performance a great deal, though...
## perl code to do some dirty work
for my $line (`grep 'X Y Z' myhugefile`) {
    chomp $line;
    my ($a, $b, $c, $d, $e) = split(/ /, $line);
    my $cmd = 'grep -P "' . $d . ' .+? ' . $e . '" otherfile';
    for my $from_otherfile (`$cmd`) {
        chomp $from_otherfile;
        my ($oa, $ob, $oc, $od) = split(/ /, $from_otherfile);
        print "$a $ob $oc\n";
    }
}
EDIT: Use tsee's solution (above), it's much more well-thought-out.