I'm not even sure if this is possible, but sure am hoping that it is.
I have this line 766 times in the file backup.xml:
*** Hosting Services
I then have the file list.txt which contains 766 lines in it. I need to replace *** with the contents of each of the 766 lines in list.txt - and it needs to be in the same order if at all possible.
Thanks in advance for any help!
Idea:
loop over the lines of the B(ackup file)
if you (F)ind a B-line to change
read the next line of the L(ist file)
change
print the line to R(result file)
Plan:
read_open B
read_open L
write_open R
while (line from B) {
if (F) {
read replacement from L
change line
}
print line to R
}
close R, L, B
Implementation I (read_open, loop, look at B):
use strict;
use warnings;
use English qw(-no_match_vars);
my $bfn = '../data/AA-backup-xml';
open my $bfh, '<', $bfn or die "Can't read open '$bfn': $OS_ERROR";
while (my $line = <$bfh>) {
print $line;
}
close $bfh or die "Can't read close '$bfn': $OS_ERROR";
output:
perl 01.pl
whatever
whatever
*** Hosting Services
whatever
whatever
whatever
*** Hosting Services
whatever
whatever
*** Hosting Services
whatever
whatever
whatever
*** Hosting Services
Implementation II (read/write, F, replace, first result):
use Modern::Perl;
use English qw(-no_match_vars);
my $bfn = '../data/AA-backup-xml';
open my $bfh, '<', $bfn or die "Can't read open '$bfn': $OS_ERROR";
my $lfn = '../data/AA-list.txt';
open my $lfh, '<', $lfn or die "Can't read open '$lfn': $OS_ERROR";
my $rfn = '../data/AA-result';
open my $rfh, '>', $rfn or die "Can't write open '$rfn': $OS_ERROR";
while (my $line = <$bfh>) {
if ($line =~ /\*{3}/) {
my $rpl = <$lfh>;
chomp $rpl; # strip the trailing newline from the replacement line
$line =~ s/\*{3}/$rpl/;
}
print $rfh $line;
}
close $rfh or die "Can't write close '$rfn': $OS_ERROR";
close $lfh or die "Can't read close '$lfn': $OS_ERROR";
close $bfh or die "Can't read close '$bfn': $OS_ERROR";
output:
type ..\data\AA-result
whatever
whatever
001 Hosting Services
whatever
whatever
whatever
002 Hosting Services
whatever
whatever
003 Hosting Services
whatever
whatever
whatever
004 Hosting Services
If this does not 'work' for you (perhaps I misjudged the structure of B, or the F strategy is too naive), then publish a representative sample of B, L, and R.
You can use Tie::File to look at/modify files by representing them as arrays, i.e.:
use strict;
use warnings;
use Tie::File;
tie my @ra1, 'Tie::File', "test.txt" or die; # *** Hosting Services
tie my @ra2, 'Tie::File', "test1.txt" or die; # stuff you want to replace *** with
#assumes equal length
for (my $i=0; $i <= $#ra1; $i++)
{
my $temp=$ra2[$i];
$ra1[$i]=~s/(\*{3})/$temp/;
}
untie @ra1;
untie @ra2;
The above code replaces *** with the corresponding line of your list file. By writing $ra1[$i]=~s/(\*{3})/$temp/ we are directly changing the file that @ra1 is tied to.
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
SET /a linenum=0
(
FOR /f "delims=" %%i IN ('findstr /n "$" ^<backup.xml') DO (
SET "line=%%i"
SET "line=!line:*:=!"
IF DEFINED line (
IF "!line!"=="*** Hosting Services" (
SET /a linenum+=1
FOR /f "tokens=1*delims=:" %%r IN ('findstr /n "$" ^<list.txt') DO (
IF !linenum!==%%r (ECHO(%%s Hosting Services)
)
) ELSE (ECHO(!line!)
) ELSE (ECHO()
)
)>new.xml
GOTO :eof
IIUC, this should replace each "*** Hosting Services" line from backup.xml with the corresponding line from list.txt substituted for the ***, creating a new file, new.xml.
It's pretty short and sweet in awk:
awk '
NR == FNR {list[FNR]=$0; next}
/\*\*\* Hosting Services/ {sub(/\*\*\*/, list[++count])}
{print}
' list.txt backup.xml > new_backup.xml
a2p turns that into
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' '; # set output field separator
$\ = "\n"; # set output record separator
line: while (<>) {
chomp; # strip record separator
if ($. == ($.-$FNRbase)) {
$list{($.-$FNRbase)} = $_;
next line;
}
if (/\*\*\* Hosting Services/) {
($s_ = '"'.($list{++$count}).'"') =~ s/&/\$&/g, s/\*\*\*/eval $s_/e;
}
print $_;
}
continue {
$FNRbase = $. if eof;
}
open ( F1, "file1.txt" );
open ( F2, "+>file2.txt" );
$/ = "\n\n" ;
while (<F1>) {
print F2;
$/ = "\n" ;
@arr = <F2> ;
@found = grep(/^: /, @arr);
if( $#found == -1) {
truncate(F2, $length);
}
$/ = "\n\n" ;
}
Please help me to find the error in this code.
file1.txt:
:a
:b
: x
:y
Note: ":a", ":b" and ": x", ":y" are separated by "\n", while ":b" and ": x" are separated by "\n\n".
Expected contents in file2.txt after execution of program:
: x
:y
There are things wrong with your program, but it's difficult to tell which of them you might be referring to, since you do not actually specify a problem or ask a question.
open ( F1, "file1.txt" );
open ( F2, "+>file2.txt" );
You should use three argument open, with explicit mode and lexical file handle. Also, you should check the return value of the open to see that it did not fail and why. Also, using better names for file handles does not hurt.
open my $in, "<", "file1.txt" or die "Cannot open file1.txt for reading: $!";
open my $out, "+>", "file2.txt" or die "Cannot open file2.txt for overwrite: $!";
Do note that the file open mode +> will truncate your file when you open it, but allow you to both read and write to/from the file handle. Most of the time, you do not want this.
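For illustration, here is a minimal sketch of that behaviour (the file name is made up): the open itself empties the file, after which you can write, seek back, and read.
open my $fh, '+>', 'scratch.txt' or die "Cannot open scratch.txt: $!"; # truncates immediately
print {$fh} "first line\n";   # write to the now-empty file
seek $fh, 0, 0;               # rewind to the beginning
print scalar <$fh>;           # reads back "first line"
close $fh;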
$/ = "\n\n" ;
while (<F1>) {
Setting the input record separator $/ to "\n\n" makes the read return paragraphs from your input file; in your case (assuming I got the formatting right), the first read would be
$_ = ":a
:b
";
You then print this value to your output file
print F2; # this means "print F2 $_"
You then change the input record separator again, and read all the lines in your output file:
$/ = "\n" ;
@arr = <F2> ;
But unfortunately, this is wrong, because the position of the file handle will be at the end of the file (eof), since this is a file handle you are printing to. So @arr will be empty.
@found = grep(/^: /, @arr);
if( $#found == -1) {
truncate(F2, $length);
}
So the truncate will always happen. Also, of course, $length is an undefined variable, so it will give you a warning such as "Use of uninitialized value $length in truncate at ..." unless you have been so foolish as to not use:
use warnings;
I assume that what you are trying to do here is to check the input received before printing it to the output, but you should know that printing and then truncating afterwards is a horrible idea. Why not check the input before printing it instead? That's not only how it's done 99.99% of the time, it's also the simplest and most logical way to do it.
if (/^: /) {
print $out $_;
}
/^: / is short for $_ =~ /^: /, in case you are uncertain -- it applies a regex match to the default input variable $_, which is what you are reading into in the while (<F1>) loop condition, which in turn is short for while ($_ = <F1>)
In your programs you should always use
use strict;
use warnings;
And learn to use them, because they will save you lots of time when debugging, and give you vital information about what your program is doing.
So you get:
use strict;
use warnings;
open my $in, "<", "file1.txt" or die "Cannot open file1.txt for reading: $!";
open my $out, "+>", "file2.txt" or die "Cannot open file2.txt for overwrite: $!";
$/ = "\n\n";
while (<$in>) {
if (/^: /) {
print $out $_;
}
}
Be advised that you can solve this with a simple one-liner (-00 enables paragraph mode, the equivalent of setting $/ = ""):
perl -00 -nle 'print if /^: /' file1.txt > file2.txt
Good afternoon,
I have a small Perl script that essentially emulates the tail -f functionality on a Minecraft server.log. The script checks for certain strings and acts on them in various ways.
A simplified version of the script is below:
#!/usr/bin/perl
use 5.010;
use warnings;
use strict;
my $log = "PATH TO LOG";
my $curpos;
open(my $LOGFILE, $log) or die "Cannot open log file";
# SEEK TO EOF
seek($LOGFILE, 0, 2);
for (;;){
my $line = undef;
seek($LOGFILE,0,1); ### clear EOF condition
for($curpos = tell($LOGFILE); <$LOGFILE>; $curpos = tell($LOGFILE)){
$line = "$_ \n";
if($line =~ /test string/i){
say "Found test string!";
}
}
sleep 1;
seek($LOGFILE,$curpos,0); ### Setting cursor at the EOF
}
When I had a test server up, everything seemed to work fine. In production, however, the server.log file gets rotated. When the log is rotated, the script keeps hold of the original file, not the file that replaces it.
I.e. server.log is being monitored, server.log gets moved and compressed to logs/date_x.log.gz, server.log is now a new file.
How can I adapt my script to monitor the filename "server.log", rather than the file that is currently called "server.log"?
Have you considered just using tail -F as the input to your script:
tail -F server.log 2>/dev/null | perl -nE 'say if /match/'
This will watch the named file, passing each line to your script on STDIN. It will correctly track only server.log, as shown below:
echo 'match' >server.log
(matched by script)
mv server.log server.log.old
echo 'match' >server.log
(also matched)
You can open the tail -F as a file in Perl using:
open(my $fh, '-|', 'tail -F server.log 2>/dev/null') or die "$!\n";
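From there, the polling loop in the original script collapses to a plain read loop; a minimal sketch, reusing the question's "test string" check:
while (my $line = <$fh>) {   # blocks until tail emits a new line
    print "Found test string!\n" if $line =~ /test string/i;
}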
You can check the inode numbers of both the file name and the file handle using stat(). If they differ, the log file was rotated and the file should be reopened for reading.
In the code below, $readline is an iterator (obtained from get_readline($file_name)) which transparently takes care of such changes and does the "right thing".
use strict;
use warnings;
sub get_readline {
my ($fname) = @_;
my $fh;
return sub {
my ($i1, $i2) = map { $_ ? (stat $_)[1] : 0 } $fh, $fname;
if ($i1 != $i2) {
undef $fh;
open $fh, "<", $fname or return;
}
# reset handle to current position
seek($fh, 0, 1) or die $!;
return wantarray ? <$fh> : scalar <$fh>;
};
}
`seq 11 > log_file`;
my $readline = get_readline("log_file");
print "[regular reading]\n";
print $readline->();
print "[any new content?]\n";
print $readline->();
`rm log_file; seq 11 > log_file`;
print "[reading after log rotate]\n";
print $readline->();
output
[regular reading]
1
2
3
4
5
6
7
8
9
10
11
[any new content?]
[reading after log rotate]
1
2
3
4
5
6
7
8
9
10
11
input file:
1,a,USA,,
2,b,UK,,
3,c,USA,,
I want to update the 4th column in the input file, taking values from one of the tables.
My code looks like this:
my $number_dbh = DBI->connect("DBI:Oracle:$INST", $USER, $PASS ) or die "Couldn't connect to database $INST";
my $num_smh;
print "connected \n ";
open FILE , "+>>$input_file" or die "can't open the input file";
print "echo \n";
while(my $line=<FILE>)
{
my #line_a=split(/\,/,$line);
$num_smh = $number_dbh->prepare("SELECT phone_no from book where number = $line_a[0]");
$num_smh->execute() or die "Couldn't execute stmt, error : $DBI::errstr";
my $number = $num_smh->fetchrow_array();
$line_a[3]=$number;
}
Looks like your data is in CSV format. You may want to use Parse::CSV.
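If you go that route, a minimal sketch might look like the following (untested; lookup_phone is a hypothetical helper wrapping your SELECT):
use strict;
use warnings;
use Parse::CSV;

my $parser = Parse::CSV->new( file => 'input.csv' );
while ( my $row = $parser->fetch ) {         # each $row is an array ref of fields
    $row->[3] = lookup_phone( $row->[0] );   # hypothetical DBI lookup keyed on the first column
    print join( ',', @$row ), "\n";
}
die "CSV error: " . $parser->errstr if $parser->errstr;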
+>> doesn't do what you think it does. In fact, in testing it doesn't seem to do anything at all. Further, +< does something very strange:
% cat file.txt
1,a,USA,,
2,b,UK,,
3,c,USA,,
% cat update.pl
#!perl
use strict;
use warnings;
open my $fh, '+<', 'file.txt' or die "$!";
while ( my $line = <$fh> ) {
$line .= "hello\n";
print $fh $line;
}
% perl update.pl
% cat file.txt
1,a,USA,,
1,a,USA,,
hello
,,
,,
hello
%
+> appears to truncate the file.
Really, what you want to do is to write to a new file, then copy that file over the old one. Opening a file for simultaneous read/write looks like you'd be entering a world of hurt.
As an aside, you should use the three-argument form of open() (safer for "weird" filenames) and use lexical filehandles (they're not global, and when they go out of scope your file automatically closes for you).
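A minimal sketch of that write-then-rename pattern, with the database lookup left as a placeholder (file names are made up):
use strict;
use warnings;

my $input = 'input.csv';
my $temp  = "$input.tmp";

open my $in,  '<', $input or die "Cannot open $input: $!";
open my $out, '>', $temp  or die "Cannot open $temp: $!";

while ( my $line = <$in> ) {
    chomp $line;
    my @fields = split /,/, $line, -1;   # -1 keeps trailing empty fields
    $fields[3] = 'PHONE_NO_HERE';        # placeholder for the value fetched via DBI
    print $out join( ',', @fields ), "\n";
}

close $in;
close $out;
rename $temp, $input or die "Cannot replace $input: $!";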
I have several text files that were once tables in a database, which is now disassembled. I'm trying to reassemble them, which will be easy once I get them into a usable form. The first file, "keys.text", is just a list of labels, inconsistently formatted. Like:
Sa 1 #
Sa 2
U 328 #*
It's always letter(s), [space], number(s), [space], and sometimes symbol(s). The lines in the .txt files that match these keys begin the same way, followed by a line of text, also separated, or delimited, by a SPACE.
Sa 1 # Random line of text follows.
Sa 2 This text is just as random.
U 328 #* Continuing text...
What I'm trying to do in the code below is match each key from "keys.text" with the same key in the .txt files, and put a tab between the key and the text. I'm sure I'm overlooking something very basic, but the result I'm getting looks identical to the source .txt file.
Thanks in advance for any leads or assistance!
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;
open(IN1, "keys.text");
my $key;
# Read each line one at a time
while ($key = <IN1>) {
# For each txt file in the current directory
foreach my $file (<*.txt>) {
open(IN, $file) or die("Cannot open TXT file for reading: $!");
open(OUT, ">temp.txt") or die("Cannot open output file: $!");
# Add temp modified file into directory
my $newFilename = "modified\/keyed_" . $file;
my $line;
# Read each line one at a time
while ($line = <IN>) {
$line =~ s/"\$key"/"\$key" . "\/t"/;
print(OUT "$line");
}
rename("temp.txt", "$newFilename");
}
}
EDIT: Just to clarify, the results should retain the symbols from the keys as well, if there are any. So they'd look like:
Sa 1 # Random line of text follows.
Sa 2 This text is just as random.
U 328 #* Continuing text...
The regex seems quoted rather oddly to me. Wouldn't
$line =~ s/$key/$key\t/;
work better?
Also, IIRC, <IN1> will leave the newline on the end of your $key. chomp $key to get rid of that.
And don't put parentheses around your print args, esp when you're writing to a file handle. It looks wrong, whether it is or not, and distracts people from the real problems.
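Putting those fixes together, the inner read loop might look like this minimal sketch (untested; \Q...\E is added so that metacharacters such as * or # in a key are matched literally):
chomp $key;   # strip the newline left by <IN1>
while (my $line = <IN>) {
    $line =~ s/^\Q$key\E/$key\t/;   # insert a TAB after a literal match of the key
    print OUT $line;
}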
if Perl is not a must, you can use this awk one liner
$ cat keys.txt
Sa 1 #
Sa 2
U 328 #*
$ cat mytext.txt
Sa 1 # Random line of text follows.
Sa 2 This text is just as random.
U 328 #* Continuing text...
$ awk 'FNR==NR{ k[$1 SEP $2];next }($1 SEP $2 in k) {$2=$2"\t"}1 ' keys.txt mytext.txt
Sa 1 # Random line of text follows.
Sa 2 This text is just as random.
U 328 #* Continuing text...
Using split rather than s/// makes the problem straightforward. In the code below, read_keys extracts the keys from keys.text and records them in a hash.
Then for all files named on the command line, available in the special Perl array @ARGV, we inspect each line to see whether it begins with a key. If not, we leave it alone, but otherwise insert a TAB between the key and the text.
Note that we edit the files in-place thanks to Perl's handy -i option:
-i[extension]
specifies that files processed by the <> construct are to be edited in-place. It does this by renaming the input file, opening the output file by the original name, and selecting that output file as the default for print statements. The extension, if supplied, is used to modify the name of the old file to make a backup copy …
The line split " ", $_, 3 separates the current line into exactly three fields. This is necessary to protect whitespace that's likely to be present in the text portion of the line.
#! /usr/bin/perl -i.bak
use warnings;
use strict;
sub usage { "Usage: $0 text-file\n" }
sub read_keys {
my $path = "keys.text";
open my $fh, "<", $path
or die "$0: open $path: $!";
my %key;
while (<$fh>) {
my($text,$num) = split;
++$key{$text}{$num} if defined $text && defined $num;
}
wantarray ? %key : \%key;
}
die usage unless @ARGV;
my %key = read_keys;
while (<>) {
my($text,$num,$line) = split " ", $_, 3;
$_ = "$text $num\t$line" if defined $text &&
defined $num &&
$key{$text}{$num};
print;
}
Sample run:
$ ./add-tab input
$ diff -u input.bak input
--- input.bak 2010-07-20 20:47:38.688916978 -0500
+++ input 2010-07-20 21:00:21.119531937 -0500
@@ -1,3 +1,3 @@
-Sa 1 # Random line of text follows.
-Sa 2 This text is just as random.
-U 328 #* Continuing text...
+Sa 1 # Random line of text follows.
+Sa 2 This text is just as random.
+U 328 #* Continuing text...
Fun answers:
$line =~ s/(?<=$key)/\t/;
Where (?<=XXXX) is a zero-width positive lookbehind for XXXX. That means it matches just after XXXX without being part of the match that gets substituted.
And:
$line =~ s/$key/$key . "\t"/e;
Where the /e flag at the end means to do one eval of what's in the second half of the s/// before filling it in.
Important note: I'm not recommending either of these, they obfuscate the program. But they're interesting. :-)
How about doing a separate slurp of each file? For the first file you open the keys and create a preliminary hash. For the second file, all you need to do is add the text to the hash.
use strict;
use warnings;
my $keys_file = "path to keys.txt";
my $content_file = "path to content.txt";
my $output_file = "path to output.txt";
my %hash = ();
my $keys_regex = '^([a-zA-Z]+)\s*(\d+)\s*([^\da-zA-Z\s]*)';
open my $keys_fh, '<', $keys_file or die "could not open $keys_file";
while (<$keys_fh>) {
my $line = $_;
if ($line =~ /$keys_regex/) {
my $key = $1;
my $number = $2;
my $symbol = $3;
$hash{$key}{'number'} = $number;
$hash{$key}{'symbol'} = $symbol;
}
}
close $keys_fh;
open my $content_fh, '<', $content_file or die "could not open $content_file";
while (<$content_fh>) {
my $line = $_;
chomp $line;
if ($line =~ /^([a-zA-Z]+)/) {
my $key = $1;
# strip the content line of key, number and symbol to leave only the text
$line =~ s/^$key//;
$line =~ s/\s*$hash{$key}{'number'}//;
$line =~ s/\s*\Q$hash{$key}{'symbol'}\E//; # \Q...\E so symbols like #* match literally
$line =~ s/^\s+//;
$hash{$key}{'text'} = $line;
}
}
close $content_fh;
open my $out_fh, '>', $output_file or die "could not open $output_file";
for my $key (keys %hash){
print $out_fh $key . " " . $hash{$key}{'number'} . " " . $hash{$key}{'symbol'} . "\t" . $hash{$key}{'text'} . "\n";
}
close $out_fh;
I haven't had a chance to test it yet and the solution seems a little hacky with all the regex but might give you an idea of something else you can try.
This looks like the perfect place for the map function in Perl! Read in the entire text file into an array, then apply the map function across the entire array. The only other thing you might want to do is use the quotemeta function to escape out any possible regular expressions in your keys.
Using map is very efficient. I also read the keys into an array in order to not have to keep opening and closing the keys file in my loop. It's an O(n^2)-style algorithm, but if your keys file isn't that big, it shouldn't be too bad.
#! /usr/bin/env perl
use strict;
use vars;
use warnings;
open (KEYS, "keys.text")
or die "Cannot open 'keys.text' for reading\n";
my @keys = <KEYS>;
close (KEYS);
foreach my $file (glob("*.txt")) {
open (TEXT, "$file")
or die "Cannot open '$file' for reading\n";
my @textArray = <TEXT>;
close (TEXT);
foreach my $line (@keys) {
chomp $line;
map($_ =~ s/^\Q$line\E/$line\t/, @textArray); # \Q...\E applies quotemeta, as suggested above
}
open (NEW_TEXT, ">$file.new") or
die qq(Can't open file "$file.new" for writing\n);
print NEW_TEXT @textArray; # lines still carry their original newlines
close (NEW_TEXT);
}
I want to print certain lines from a text file in Unix. The line numbers to be printed are listed in another text file (one on each line).
Is there a quick way to do this with Perl or a shell script?
Assuming the line numbers to be printed are sorted.
open my $fh, '<', 'line_numbers' or die $!;
my @ln = <$fh>;
open my $tx, '<', 'text_file' or die $!;
foreach my $ln (@ln) {
my $line;
do {
$line = <$tx>;
} until $. == $ln and defined $line;
print $line if defined $line;
}
$ cat numbers
1
4
6
$ cat file
one
two
three
four
five
six
seven
$ awk 'FNR==NR{num[$1];next}(FNR in num)' numbers file
one
four
six
You can avoid the limitations of some of the other answers (the requirement for sorted line numbers) simply by using eof within a basic while(<>) block. That tells you when you've stopped reading line numbers and started reading data. Note that you need to reset $. when the switch occurs.
# Usage: perl script.pl LINE_NUMS_FILE DATA_FILE
use strict;
use warnings;
my %keep;
my $reading_line_nums = 1;
while (<>){
if ($reading_line_nums){
chomp;
$keep{$_} = 1;
$reading_line_nums = $. = 0 if eof;
}
else {
print if exists $keep{$.};
}
}
cat -n foo | join foo2 - | cut -d" " -f2-
where foo is your file with lines to print and foo2 is your file of line numbers
Here is a way to do this in Perl without slurping anything so that the memory footprint of the program is independent of the sizes of both files (it does assume that the line numbers to be printed are sorted):
#!/usr/bin/perl
use strict; use warnings;
use autodie;
@ARGV == 2
or die "Supply src_file and filter_file as arguments\n";
my ($src_file, $filter_file) = @ARGV;
open my $src_h, '<', $src_file;
open my $filter_h, '<', $filter_file;
my $to_print = <$filter_h>;
while ( my $src_line = <$src_h> ) {
last unless defined $to_print;
if ( $. == $to_print ) {
print $src_line;
$to_print = <$filter_h>;
}
}
close $filter_h;
close $src_h;
Generate the source file:
C:\> perl -le "print for aa .. zz" > src
Generate the filter file:
C:\> perl -le "print for grep { rand > 0.75 } 1 .. 52" > filter
C:\> cat filter
4
6
10
12
13
19
23
24
28
44
49
50
Output:
C:\> f src filter
ad
af
aj
al
am
as
aw
ax
bb
br
bw
bx
To deal with an unsorted filter file, you can modify the while loop:
while ( my $src_line = <$src_h> ) {
last unless defined $to_print;
if ( $. > $to_print ) {
seek $src_h, 0, 0;
$. = 0;
}
if ( $. == $to_print ) {
print $src_line;
$to_print = <$filter_h>;
}
}
This would waste a lot of time if the contents of the filter file are fairly random because it would keep rewinding to the beginning of the source file. In that case, I would recommend using Tie::File.
I wouldn't do it this way with large files, but (untested):
open(my $fh1, "<", "line_number_file.txt") or die "Err: $!";
chomp(my @line_numbers = <$fh1>);
$_-- for @line_numbers;
close $fh1;
open(my $fh2, "<", "text_file.txt") or die "Err: $!";
my @lines = <$fh2>;
print @lines[@line_numbers];
close $fh2;
I'd do it like this:
#!/bin/bash
numbersfile=numbers
datafile=data
while read lineno; do
sed -n "${lineno}p" "$datafile"
done < "$numbersfile"
The downside to my approach is that it spawns a lot of processes, so it will be slower than other options. It's infinitely more readable though.
This is a short solution using bash and sed
sed -n -e "$(cat num |sed 's/$/p/')" file
Where num is the file of line numbers and file is the input file (tested on OS X Snow Leopard).
$ cat num
1
3
5
$ cat file
Line One
Line Two
Line Three
Line Four
Line Five
$ sed -n -e "$(cat num |sed 's/$/p/')" file
Line One
Line Three
Line Five
$ cat input
every
good
bird
does
fly
$ cat lines
2
4
$ perl -ne 'BEGIN{($a,$b) = `cat lines`} print if $.==$a .. $.==$b' input
good
bird
does
If that's too much for a one-liner, use
#! /usr/bin/perl
use warnings;
use strict;
sub start_stop {
my($path) = @_;
open my $fh, "<", $path
or die "$0: open $path: $!";
local $/;
return ($1,$2) if <$fh> =~ /\s*(\d+)\s*(\d+)/;
die "$0: $path: could not find start and stop line numbers";
}
my($start,$stop) = start_stop "lines";
while (<>) {
print if $. == $start .. $. == $stop;
}
Perl's magic open allows for creative possibilities such as
$ ./lines-between 'tac lines-between|'
print if $. == $start .. $. == $stop;
while (<>) {
Here is a way to do this using Tie::File:
#!/usr/bin/perl
use strict; use warnings;
use autodie;
use Tie::File;
@ARGV == 2
or die "Supply src_file and filter_file as arguments\n";
my ($src_file, $filter_file) = @ARGV;
tie my @source, 'Tie::File', $src_file, autochomp => 0
or die "Cannot tie source '$src_file': $!";
open my $filter_h, '<', $filter_file;
while ( my $to_print = <$filter_h> ) {
print $source[$to_print - 1];
}
close $filter_h;
untie @source;