Extracting parts of files by separators - perl

I have a file that is in the following format:
Preamble
---------------------
Section 1
...
---------------------
---------------------
Section 2
...
---------------------
---------------------
Section 3
...
---------------------
Afterwords
And I want to extract each section, splitting on the separator, so that I end up with:
file0:
Section 1
...
file1:
Section 2
...
file2:
Section 3
...
...
Is there a simple way to do this? Thanks.

This should do it ([update]: using chomp and $_ makes this even shorter).
If your input record separator is a sequence of 21 -'s, this is easy with perl -ne:
perl -ne 'BEGIN{ $/ = ("-" x 21) . "\n"; $i = 0; }
    do { open F, ">file" . ($i++);
         chomp;
         print F;
         close F;
    } if /^Section/' yourfile.txt
This should work, and create files file0 .. fileN.
Explanation
Easier to explain as a stand-alone Perl-script perhaps?
$/ = ("-" x 21) . "\n"; # set the input record separator to 21 dashes plus a newline
my $i = 0; # output file number
open IN, "<yourfile.txt" or die "$!";
while (<IN>) { # each "record" will be available as $_
    do { open F, ">file" . ($i++);
         chomp;   # remove the trailing "---..."
         print F; # write the record to the file
         close F;
    } if /^Section/ # do all this only if this is a Section
}
Perl's awk lineage is useful here, so let's show an awk version for comparison:
awk 'BEGIN{ RS="\n-+\n"; i=0 }
     /Section/ { print > ("file_" i++ ".txt") }' yourfile.txt
(awk has no chomp, and with this RS the separator is not part of the record anyway.) Not too bad compared to the Perl version; it's actually shorter. The $/ in Perl is the RS variable in awk. awk (or at least gawk) has the upper hand here: RS may be a regular expression!

You can do it with the shell too:
#!/bin/bash
i=0
toprint=false
while read -r line ; do
    # If the line contains "Section " followed by a
    # digit, the following lines have to be printed
    if echo "$line" | grep -Eq "Section [0-9]+" ; then
        toprint=true
        i=$((i + 1))
        touch "file$i"
    fi
    # If the line contains "--------------------",
    # the following lines must not be printed
    if echo "$line" | grep -Eq "[-]{20}" ; then
        toprint=false
    fi
    # Print the line if needed
    if $toprint ; then
        echo "$line" >> "file$i"
    fi
done < sections.txt
(Note that toprint must be initialized to false, or the preamble lines get printed.)

Here's what you're looking for:
awk '/^-{21}$/ { f++; next } f%2!=0 { print > ("file" (f-1)/2 ".txt") }' file
Results:
Contents of file0.txt:
Section 1
...
Contents of file1.txt:
Section 2
...
Contents of file2.txt:
Section 3
...
As you can see the above filenames are 'zero' indexed. If you'd like filenames 'one' indexed, simply change (f-1)/2 to (f+1)/2. HTH.
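A self-contained check of the dash-counting idea (input invented; /^-+$/ is used here instead of /^-{21}$/ for portability to awks without interval expressions):

```shell
# Build a small sample input in the question's layout.
sep=$(printf '%.0s-' $(seq 21))    # exactly 21 dashes
printf '%s\n' Preamble "$sep" 'Section 1' aaa "$sep" \
              "$sep" 'Section 2' bbb "$sep" Afterwords > input.txt

# f counts separator lines seen so far; an odd f means we are inside a section.
awk '/^-+$/ { f++; next } f%2!=0 { print > ("file" (f-1)/2 ".txt") }' input.txt
```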

Given your file's format, here's one option:
use strict;
use warnings;

my $fh;
my $sep = '-' x 21;

while (<>) {
    if (/^Section\s+(\d+)/) {
        open $fh, '>', 'file' . ( $1 - 1 ) . '.txt' or die $!;
    }
    print $fh $_ if defined $fh and !/^$sep/;
}
On your data, this creates file0.txt .. file2.txt, with file0.txt containing:
Section 1
...

Related

Delete the last line of a file using perl

sed '$d' $file;
Using this command from within a Perl script doesn't seem to work, as $ is a special symbol in Perl.
I don't know why you are using sed inside Perl. Perl itself has a standard module that can delete the last line from a file.
Use the standard (as of v5.8) Tie::File module and delete the last element from the tied array:
use Tie::File;
tie @lines, 'Tie::File', $file or die "can't update $file: $!";
delete $lines[-1];
Last line only
The closest syntax seems to be:
perl -ne 'print unless eof()'
This acts like sed, i.e. without the requirement of reading the whole file into memory, and it can work with a FIFO like STDIN.
See:
perl -ne 'print unless eof()' < <(seq 1 3)
1
2
or maybe:
perl -pe '$_=undef if eof()' < <(seq 1 3)
1
2
First and last lines
perl -pe '
BEGIN {
chomp(my $first= <>);
print "Something special with $first\n";
};
do {
chomp;
print "Other speciality with $_\n";
undef $_;
} if eof();
' < <(seq 1 5)
will render:
Something special with 1
2
3
4
Other speciality with 5
Shortest: first and last line:
perl -pe 's/^/Something... / if$.==1||eof' < <(seq 1 5)
will render:
Something... 1
2
3
4
Something... 5
Try this:
perl -pe 'BEGIN{$s=join"|",qw|1 3 7 21|;};
if ($.=~/^($s)$/||eof){s/^/---/}else{s/$/.../}' < <(seq 1 22)
... something like sed command:
sed '1ba;3ba;7ba;21ba;$ba;s/$/.../;bb;:a;s/^/---/;:b' < <(seq 1 22)
In a script file:
#!/usr/bin/perl -w
use strict;
sub something {
chomp;
print "Something special with $_.\n";
}
$_=<>;
something;
while (<>) {
if (eof) { something; }
else { print; };
}
will give:
/tmp/script.pl < <(seq 1 5)
Something special with 1.
2
3
4
Something special with 5.
It sounds like you are trying to execute the sed command from the middle of a perl script. I would recommend against that approach because it will work only on non-Windows systems. Below is a more perlish approach, in which you process all lines except the first and the last, rather than spending effort deleting file contents. Thank you.
Assuming "myfile.txt" as the input file:
open (FH, "<", "myfile.txt") or die "Unable to open \"myfile.txt\": $!\n";
$i = 1;
while (<FH>) {
    next if ($i++ == 1 or eof);   # skip the first line and the last line
    # Process other lines
    print "\nProcessing Line: $_";
}
close (FH);
print "\n";
myfile.txt -
# First line
This is the beginning of comment free file.
Hope this is also the line to get processed!
# last line
Result -
Processing Line: This is the beginning of comment free file.
Processing Line: Hope this is also the line to get processed!

Append a new column to file in perl

I've got the follow function inside a perl script:
sub fileSize {
    my $file = shift;
    my $opt  = shift;
    open (FILE, $file) or die "Could not open file $file: $!";
    $/ = ">";
    my $junk = <FILE>;
    my $g_size = 0;
    while ( my $rec = <FILE> ) {
        chomp $rec;
        my ($name, @seqLines) = split /\n/, $rec;
        my $sec = join('', @seqLines);
        $g_size += length($sec);
        if ( $opt == 1 ) {
            open TMP, ">>", "tmp" or die "Could not open chr_sizes.log: $!\n";
            print TMP "$name\t", length($sec), "\n";
        }
    }
    if ( $opt == 0 ) {
        PrintLog( "file_size: $g_size", 0 );
    }
    else {
        print TMP "file_size: $g_size\n";
        close TMP;
    }
    $/ = "\n";
    close FILE;
}
Input file format:
>one
AAAAA
>two
BBB
>three
C
I have several input files with that format. The line beginning with ">" is the same but the other lines can be of different length. The output of the function with only one file is:
one 5
two 3
three 1
I want to execute the function in a loop with this for each file:
foreach my $file ( @refs ) {
    fileSize( $file, 1 );
}
When running the next iteration, let's say with this file:
>one
AAAAABB
>two
BBBVFVF
>three
CS
I'd like to obtain this output:
one 5 7
two 3 7
three 1 2
How can I modify the function or the script to get this? As can be seen, my function currently appends the text to the file.
Thanks!
I've left out your options and the file IO operations and have concentrated on showing a way to do this with an array of arrays from the command line. I hope it helps. I'll mostly leave wiring it up into your own script and subroutines to you :-)
Running this one liner against your first data file:
perl -lne '$name = s/>//r if /^>/ ;
           push @strings , [$name, length $_] if !/^>/ ;
           END { print "@{$_} " for @strings }' datafile1.txt
gives this output:
one 5
two 3
three 1
Substituting the second version of the data file (i.e. where record one contains AAAAABB) gives the expected results as well.
one 7
two 7
three 2
In your script above, you save to an output file in this format. So, to append columns to each row in your output file, we can just munge each of your data files in the same way (with any luck this means things can be converted into a function that will work in a foreach loop). If we save the transformed data to be output in an array of arrays (AoA), then we can just push the length values we get for each data file string onto the corresponding anonymous array element and then print out the array. Voilà! Now let's hope it works ;-)
You might want to install Data::Printer which can be used from the command line as -MDDP to visualize data structures.
First - run the above script and redirect the output to a file with > /tmp/output.txt
Next - try this longish one-liner that uses DDP and p to show the structure of the array we create:
perl -MDDP -lne 'BEGIN{ local @ARGV = shift;
                        @tmp = map { [split] } <>; p @tmp }
                 $name = s/>//r if /^>/ ;
                 push @out , [ $name, length $_ ] if !/^>/ ;
                 END{ p @out ; }' /tmp/output.txt datafile2.txt
In the BEGIN block we local-ize @ARGV; shift off the first file (our version of your TMP file) - {local @ARGV=shift} is almost a perl idiom for handling multiple input files; we then split it inside an anonymous array constructor ([]) and map { } that into the @tmp array, which we display with DDP's p() function. Once we are out of the BEGIN block, the implicit while (<>){ ... } that we get with perl's -n command line switch takes over and reads in the remaining file from @ARGV; we process lines starting with > - stripping the leading character and assigning the string that follows to the $name variable; the while continues, and we push $name and the length of any line that does not start with > (if !/^>/) wrapped as elements of an anonymous array [] into the @out array, which we display with p() as well (in the END{} block so it doesn't print inside our implicit while() loop). Phew!!
See the AoA that results as a gist @Github.
Finally - building on that, and now we have munged things nicely - we can change a few things in our END{...} block (add a nested for loop to push things around) and put this all together to produce the output we want.
This one liner:
perl -MDDP -lne 'BEGIN{ local @ARGV = shift; @tmp = map { [split] } <>; }
                 $name = s/>//r if /^>/ ; push @out, [ $name, length $_ ] if !/^>/ ;
                 END{ foreach $row (0..$#tmp) { push @{ $tmp[$row] }, $out[$row][-1] }
                      print "@$_" for @tmp }' output.txt datafile2.txt
produces:
one 5 7
two 3 7
three 1 2
We'll have to convert that into a script :-)
The script consists of three rather wordy subroutines that read the log file, parse the data file, and merge them. We run them in order. The first one checks to see if there is an existing log, creates one if not, and then exits to skip any further parsing/merging steps.
You should be able to wrap them in a loop of some kind that feeds files to the subroutines from an array instead of fetching them from STDIN. One caution - I'm using IO::All because it's fun and easy!
use 5.14.0;
use IO::All;

my @file = io(shift)->slurp;
my $log = "output.txt";

&readlog;
&parsedatafile;
&mergetolog;

####### subs #######

sub readlog {
    if (! -R $log) {
        print "creating first log entry\n";
        my @newlog = &parsedatafile;
        open(my $fh, '>', $log) or die "I CAN HAZ WHA????";
        print $fh "@$_ \n" for @newlog;
        exit;
    }
    else {
        map { [split] } io($log)->slurp;
    }
}

sub parsedatafile {
    my (@out, $name);
    for (@file) {
        chomp;
        $name = s/>//r if /^>/;
        push @out, [$name, length $_] if !/^>/;
    }
    @out;
}

sub mergetolog {
    my @tmp  = readlog;
    my @data = parsedatafile;
    foreach my $row (0 .. $#tmp) {
        push @{ $tmp[$row] }, $data[$row][-1];
    }
    open(my $fh, '>', $log) or die "Foobar!!!";
    print $fh "@$_ \n" for @tmp;
}
The subroutines do all the work here - you can likely find ways to shorten; combine; improve them. Is this a useful approach for you?
I hope this explanation is clear and useful to someone - corrections and comments welcome. Probably the same thing could be done with in-place editing (i.e. with perl -pi -e '...'), which is left as an exercise to those who follow ...
You need to open the output file itself. First in read mode, then in write mode.
I have written a script that does what you are asking. What really matters is the part that appends new data to old data. Adapt that to your fileSize function.
So you have the output file, output.txt
Of the form,
one 5
two 3
three 1
And an array of input files, input1.txt, input2.txt, etc., saved in the @inputfiles variable.
Of the form,
>one
AAAAA
>two
BBB
>three
C
>four
DAS
and
>one
AAAAABB
>two
BBBVFVF
>three
CS
Respectively.
After running the following perl script,
# First read previous output file.
open OUT, '<', "output.txt" or die $!;
my @outlines;
while (my $line = <OUT>) {
    chomp $line;
    push @outlines, $line;
}
close OUT;
my $outsize = scalar @outlines;

# Suppose you have your array of input file names already prepared
my @inputfiles = ("input1.txt", "input2.txt");
foreach my $file (@inputfiles) {
    open IN, '<', $file or die $!;
    my $counter = 1;   # used to compare against output size
    while (my $line = <IN>) {
        chomp $line;
        $line =~ m/^>(.*)$/;
        my $name = $1;
        my $sequence = <IN>;
        chomp $sequence;
        my $seqsize = length($sequence);
        # Here is where I append a column to the output data.
        if ($counter <= $outsize) {
            $outlines[$counter - 1] .= " $seqsize";
        } else {
            $outlines[$counter - 1] = "$name $seqsize";
        }
        $counter++;
    }
    close IN;
}

# Now rewrite the results to output.txt
open OUT, '>', "output.txt" or die $!;
foreach (@outlines) {
    print OUT "$_\n";
}
close OUT;
You generate the output,
one 5 5 7
two 3 3 7
three 1 1 2
four 3

Detect first or second file in a one-liner

In AWK, it is common to see this kind of structure for a script that runs on two files:
awk 'NR==FNR { print "first file"; next } { print "second file" }' file1 file2
Which uses the fact that there are two variables defined: FNR, which is the line number in the current file and NR which is the global count (equivalent to Perl's $.).
Is there something similar to this in Perl? I suppose that I could maybe use eof and a counter variable:
perl -nE 'if (! $fn) { say "first file" } else { say "second file" } ++$fn if eof' file1 file2
This works but it feels like I might be missing something.
To provide some context, I wrote this answer in which I manually define a hash but instead, I would like to populate the hash from the values in the first file, then do the substitutions on the second file. I suspect that there is a neat, idiomatic way of doing this in Perl.
Unfortunately, perl doesn't have a similar NR==FNR construct to differentiate between two files. What you can do is use the BEGIN block to process one file and main body to process the other.
For example, to process a file with the following:
map.txt
a=apple
b=ball
c=cat
d=dog
alpha.txt
f
a
b
d
You can do:
perl -lne'
   BEGIN {
      $x = pop;
      %h = map { chomp; ($k,$v) = split /=/; $k => $v } <>;
      @ARGV = $x
   }
   print join ":", $_, $h{$_} //= "Not Found"
' map.txt alpha.txt
f:Not Found
a:apple
b:ball
d:dog
Update:
I gave a pretty simple example, and now when I look at it, I can only say TIMTOWTDI, since you can do:
perl -F'=' -lane'
   if (@F == 2) { $h{$F[0]} = $F[1]; next }
   print join ":", $_, $h{$_} //= "Not Found"
' map.txt alpha.txt
f:Not Found
a:apple
b:ball
d:dog
However, I can say for sure that there is no NR==FNR construct for perl, and you can process the files in various different ways based on their contents.
It looks like what you're aiming for is to use the same loop for reading both files, with a conditional inside the loop that chooses what to do with the data. I would avoid that idea because you are hiding two distinct processes in the same stretch of code, making it less than clear what is going on.
But, in the case of just two files, you could remember the first file name in a BEGIN block and compare the current file ($ARGV) against it, like this
perl -nE 'BEGIN { $first = $ARGV[0] } if ($ARGV eq $first) { say "first file" } else { say "second file" }' file1 file2
(Comparing against $ARGV[0] inside the loop itself wouldn't work, because the magic <> shifts each file name off @ARGV as it opens it.)
Forgetting about one-line programs, which I hate with a passion, I would just explicitly open $ARGV[0] and $ARGV[1]. Perhaps naming them like this
use strict;
use warnings;
use 5.010;
use autodie;

my ($definitions, $data) = @ARGV;

open my $fh, '<', $definitions;
while (<$fh>) {
    # Build hash
}

open $fh, '<', $data;
while (<$fh>) {
    # Process file
}
But if you want to avail yourself of the automatic opening facilities then you can mess with @ARGV like this
use strict;
use warnings;

my ($definitions, $data) = @ARGV;

@ARGV = ($definitions);
while (<>) {
    # Build hash
}

@ARGV = ($data);
while (<>) {
    # Process file
}
You can also create your own $fnr and compare to $..
Given:
var='first line
second line'
echo "$var" >f1
echo "$var" >f2
echo "$var" >f3
You can create a pseudo-FNR by setting a variable in the BEGIN block and resetting it at each eof:
perl -lnE 'BEGIN{ $fnr = 1 }
    if ($fnr == $.) {
        say "first file: $ARGV, $fnr, $. $_";
    }
    else {
        say "$ARGV, $fnr, $. $_";
    }
    eof ? ($fnr = 1) : $fnr++;' f{1..3}
Prints:
first file: f1, 1, 1 first line
first file: f1, 2, 2 second line
f2, 1, 3 first line
f2, 2, 4 second line
f3, 1, 5 first line
f3, 2, 6 second line
Definitely not as elegant as awk, but it works.
Note that Ruby has support for FNR==NR type logic.

Line by line editing to all the files in a folder ( and subfolders ) in perl

I want my perl program to substitute '{' -> '{function('.counter++.')' in all the files, except on lines where a '{' and a '}' appear together, and except when the '{' appears one line under a 'typedef' substring.
#!/usr/bin/perl
use strict;
use warnings;
use Tie::File;
use File::Find;
my $dir = "C:/test dir";
# fill up our argument list with file names:
find(sub { if (-f && /\.[hc]$/) { push @ARGV, $File::Find::name } }, $dir);
$^I = ".bak"; # supply backup string to enable in-place edit
my $counter = 0;
# now process our files
while (<>)
{
    my @lines;
    # copy each line from the text file to the array @lines and add a function call after every '{'
    tie @lines, 'Tie::File', $ARGV or die "Can't read file: $!\n"
    foreach (@lines)
    {
        if (!( index (@lines,'}') != -1 )) # if there is a '}' in the same line don't add the macro
        {
            s/{/'{function(' . $counter++ . ')'/ge;
            print;
        }
    }
    untie @lines; # free @lines
}
What I was trying to do is to go through all the files in @ARGV that I found in my dir and subdirs, and for each *.c or *.h file go line by line and check whether the line contains '}'. If it does, the program won't make the substitution; if it doesn't, the program will substitute '{' with '{function('.counter++.');'.
Unfortunately this code does not work. I'm ashamed to say that I've been trying to make it work all day and still no go. I think my problem is that I'm not really working with lines when I search for '{', but I don't understand why. I would really appreciate some help.
I would also like to add that I am working in windows environment.
Thank You!!
Edit: so far with your help this is the code:
use strict;
use warnings;
use File::Find;
my $dir = "C:/projects/SW/fw/phy"; # use forward slashes in paths
# fill up our argument list with file names:
find(sub { if (-f && /\.[hc]$/) { push @ARGV, $File::Find::name } }, $dir);
$^I = ".bak"; # supply backup string to enable in-place edit
my $counter = 0;
# now process our files
while (<>) {
    s/{/'{ function(' . $counter++ . ')'/ge unless /}/;
    print;
}
The only thing left to do is make it ignore the '{' substitution when it is one line under a 'typedef' substring, like this:
typedef struct
{
}examp;
I would greatly appreciate your help! Thank you! :)
Edit #2: This is the final code:
use strict;
use warnings;
use File::Find;
my $dir = "C:/exmp";
# fill up our argument list with file names:
find(sub { if (-f && /\.[hc]$/) { push @ARGV, $File::Find::name } }, $dir);
$^I = ".bak"; # supply backup string to enable in-place edit
my $counter = 0;
my $td = 0;
# now process our files
while (<>) {
    s/{/'{ function(' . $counter++ . ')'/ge if /{[^}]*$/ && $td == 0;
    $td = do { (/typedef/ ? 1 : 0)
               || ( (/=/ ? 1 : 0) && (/if/ ? 0 : 1) && (/while/ ? 0 : 1)
                    && (/for/ ? 0 : 1) && (/switch/ ? 0 : 1) ) };
    print;
}
The code does the substitution, except when the line above the substitution place contains 'typedef'. When the line above contains '=' and none of 'if', 'while', 'for' or 'switch', the substitution will also not happen.
Thank you all for your help!
The -i switch lets you specify an extension for backup files.
Using perl:
perl -pe "/{[^}]*\$/&&do{s/{/{function('.counter++.');/}" -i.bak *
or (same result):
perl -pe "s/{/{function('.counter++.');/ if /{[^}]*\$/" -i.bak *
And for processing all files in sub-folders too, it may be simpler to use find:
find . -type f -print0 |
xargs -0 perl -pe "s/{/{function('.counter++.');/ if /{[^}]*\$/" -i.bak
Using GNU sed lets you do the job very quickly
sed -e "/{[^}]*\$/{s/{/{function('.counter++.');/}" -i.bak *
Edit For doing the modification only if the previous line doesn't contain the word typedef:
perl -pe "BEGIN { \$td=1; };s/{/{function('.counter++.');/ if /{[^}]*\$/ && \$td==1 ; \$td=do{/typedef/?0:1};" -i.bak *
(note: no my on $td - a my variable inside the BEGIN block would be invisible to the rest of the code)
could be written:
perl -pe "
    BEGIN { \$td=0; };
    s/{/{function('.counter++.');/ if /{[^}]*\$/ && \$td==0 ;
    \$td=do{/typedef/?1:0};" -i.bak *
or more readable as
perl -pe '
    BEGIN { $td=0; };
    s/{/{function(\047.counter++.\047);/ if /{[^}]*$/ && $td==0;
    $td=do{/typedef/?1:0};
' -i.bak *
Or as a perl script file: cat >addFunction.pl
#!/usr/bin/perl -pi.bak
BEGIN { $td = 0; }
s/{/{function(\047.counter++.\047);/ if /{[^}]*$/ && $td == 0;
$td = do { /typedef/ ? 1 : 0 };
Explained:
BEGIN{...} command block run at the beginning of the program.
s/// if // && do the replacement only if the current line matches and $td == 0.
$td = do { aaa ? bbb : ccc } assigns to $td: if aaa then bbb else ccc.
As perl runs sequentially, $td keeps its value until the next assignment. Since the test for the replacement is done before the $td assignment, the check uses the value from the previous line.
And finally, the same using sed:
sed -e '/{[^}]*$/{x;/./{x;s/{/{function(\o047.counter++.\o047);/;x;};x;};h;s/^.*typedef.*$//;x;' -i.bak *
or more readable:
sed -e '
/{[^}]*$/{
x;
/./{
x;
s/{/{function(\o047.counter++.\o047);/;
x;
};
x;
};
h;
s/^.*typedef.*$//;
x;
' -i.bak *
Some sed tricks:
h store (backup) current line to the hold space
x exchange current working line with the hold space
s/// well known replacement string command
\o047 octal tick: '
/{[^}]*$/{ ... } Command block run only on lines matching { and no }.
/./{ ... } Command block to do only on lines containing at least 1 character
Here is one way to skip the substitution if '}' exists:
if ( $_ !~ /}/ ) { # same as !( $_ =~ /}/ )
s/{/'{function(' . $counter++ . ')'/ge;
}
Make sure that the print is outside the conditional though, or lines that contain a '}' won't be printed.
Other ways to write it:
unless ( /}/ ) {
s/{/'{function(' . $counter++ . ')'/ge;
}
Or simply:
s/{/'{function(' . $counter++ . ')'/ge unless /}/;
I think the issue is that you're taking the index of @lines (the whole array) instead of the current line. You want this:
while (<>)
{
    my @lines;
    # tie the current file to @lines and add a function call after every '{'
    tie @lines, 'Tie::File', $ARGV or die "Can't read file: $!\n";
    foreach my $line (@lines)
    {
        if ( index ($line, '}') == -1 ) # if there is a '}' in the line, don't add the macro
        {
            $line =~ s/{/'{function(' . $counter++ . ')'/ge;
        }
        print $line;
    }
    untie @lines; # free @lines
}
I'm also a bit unclear about how $ARGV is being set. You may want to use $_ in that place based on how you have your script.

How do I append some lines from several text files to another one?

I have got nine text files in a directory, each with 1000 lines. I want to take the first 500 lines from each and write them, in order, to another text file; then take the rest (the last 500 lines) from each one and do the same as before.
awk '{if (NR<=500) {print}}' 1.txt > 2.txt # I do it 9 times, then I use cat to append.
awk '{if (NR>500) {print}}' 3.txt > 4.txt
or
awk 'NR>500' 3.txt > 4.txt
I did it with awk, but I want to learn Perl instead.
In Perl, $. holds the line number of the last accessed filehandle. With a while ($. <= 500) style check you can take the wanted number of lines.
perl -e 'open(F1, ">1.txt");
         open(F2, ">2.txt");
         foreach (@ARGV) {
             open(F, "<$_");
             while (<F>) {
                 print F1 $_ if ($. <= 500);
                 print F2 $_ if ($. > 500);
             }
             close(F);   # closing F resets $. for the next file
         }
         close(F1);
         close(F2);' <FILES>
Your explanation could agree with your example more, but I'm going by the idea that you want all 9000 lines to go into a single file. I didn't know where you were going to specify your names, so I used the command line.
use English qw<$OS_ERROR>;

my $outfile_name = 'all.txt';    # example name - set this to whatever you need
open( my $out_h, '>', $outfile_name )
    or die "Could not open '$outfile_name'! - $OS_ERROR";

my @input_h;
foreach my $name ( @ARGV ) {
    open( my $in_h, '<', $name )
        or die "Could not open '$name'! - $OS_ERROR";
    push @input_h, $in_h;
}

foreach my $in_h ( @input_h ) {
    my $lines_written = 0;
    while ( $lines_written++ < 500 ) {
        print $out_h scalar <$in_h>;
    }
}

foreach my $in_h ( @input_h ) {
    print $out_h <$in_h>;
}

close $out_h;
close $_ foreach @input_h;
open(F1, ">1.txt");
open(F2, ">2.txt");
open(F, "<$_");
while (<F>) {
    print F1 $_ if ($. <= 500);
    print F2 $_ if ($. > 500);
}
close(F);
close(F1);
close(F2);
I deleted the foreach statement and then it works - isn't that weird? Thanks for your help by the way.