Good afternoon,
I have a small Perl script that essentially emulates the tail -f functionality on a Minecraft server.log. The script checks for certain strings and acts on them in various ways.
A simplified version of the script is below:
#!/usr/bin/perl
use 5.010;
use warnings;
use strict;
my $log = "PATH TO LOG";
my $curpos;
open(my $LOGFILE, $log) or die "Cannot open log file";
# SEEK TO EOF
seek($LOGFILE, 0, 2);
for (;;) {
    my $line = undef;
    seek($LOGFILE, 0, 1);   ### clear EOF condition
    for ($curpos = tell($LOGFILE); <$LOGFILE>; $curpos = tell($LOGFILE)) {
        $line = "$_ \n";
        if ($line =~ /test string/i) {
            say "Found test string!";
        }
    }
    sleep 1;
    seek($LOGFILE, $curpos, 0);   ### reset to the last read position
}
When I had a test server up, everything seemed to work fine. In production, however, the server.log file gets rotated. When the log is rotated, the script keeps hold of the original file, not the file that replaces it.
That is: server.log is being monitored; server.log gets moved and compressed to logs/date_x.log.gz; server.log is now a new file.
How can I adapt my script to monitor the filename "server.log", rather than the file that is currently called "server.log"?
Have you considered just using tail -F as the input to your script:
tail -F server.log 2>/dev/null | perl -nE 'say if /match/'
This will watch the named file, passing each line to your script on STDIN. It will correctly track only server.log, as shown below:
echo 'match' >server.log
(matched by script)
mv server.log server.log.old
echo 'match' >server.log
(also matched)
You can open the tail -F as a file in Perl using:
open(my $fh, '-|', 'tail -F server.log 2>/dev/null') or die "$!\n";
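A minimal sketch of how that pipe could replace the seek/tell polling loop from the question (the file name and the /test string/ pattern are taken from the question; adjust as needed):

#!/usr/bin/perl
use 5.010;
use strict;
use warnings;

# tail -F follows the *name* server.log, so rotation is handled for us;
# this script just reads the pipe line by line.
open( my $LOGFILE, '-|', 'tail -F server.log 2>/dev/null' )
    or die "Cannot start tail: $!";

while ( my $line = <$LOGFILE> ) {
    say "Found test string!" if $line =~ /test string/i;
}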
You can check the inode numbers of both the file name and the open file handle using stat(). If they differ, the log file was rotated and should be reopened for reading.
$readline is an iterator (obtained from get_readline($file_name)) which transparently takes care of such changes and does the "right thing".
use strict;
use warnings;
sub get_readline {
    my ($fname) = @_;
    my $fh;

    return sub {
        my ($i1, $i2) = map { $_ ? (stat $_)[1] : 0 } $fh, $fname;
        if ($i1 != $i2) {
            undef $fh;
            open $fh, "<", $fname or return;
        }
        # reset handle to current position
        seek($fh, 0, 1) or die $!;
        return wantarray ? <$fh> : scalar <$fh>;
    };
}
`seq 11 > log_file`;
my $readline = get_readline("log_file");
print "[regular reading]\n";
print $readline->();
print "[any new content?]\n";
print $readline->();
`rm log_file; seq 11 > log_file`;
print "[reading after log rotate]\n";
print $readline->();
output:
[regular reading]
1
2
3
4
5
6
7
8
9
10
11
[any new content?]
[reading after log rotate]
1
2
3
4
5
6
7
8
9
10
11
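One way (a sketch, not part of the original answer) to drop this iterator into the question's polling loop, reusing the question's placeholder path and pattern:

my $readline = get_readline("PATH TO LOG");   # placeholder path, as in the question

for (;;) {
    while ( defined( my $line = $readline->() ) ) {
        print "Found test string!\n" if $line =~ /test string/i;
    }
    sleep 1;
}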
Related
I would like to know a fast/efficient way, in any program (awk/perl/python), to split a CSV file (say 10k columns) into multiple small files, each containing 2 columns. I would be doing this on a Unix machine.
#contents of large_file.csv
1,2,3,4,5,6,7,8
a,b,c,d,e,f,g,h
q,w,e,r,t,y,u,i
a,s,d,f,g,h,j,k
z,x,c,v,b,n,m,z
I now want multiple files like this:
# contents of 1.csv
1,2
a,b
q,w
a,s
z,x
# contents of 2.csv
1,3
a,c
q,e
a,d
z,c
# contents of 3.csv
1,4
a,d
q,r
a,f
z,v
and so on...
I can do this currently with awk on small files (say 30 columns) like this:
awk -F, 'BEGIN{OFS=",";} {for (i=1; i < NF; i++) print $1, $(i+1) > i ".csv"}' large_file.csv
The above takes a very long time with large files and I was wondering if there is a faster and more efficient way of doing the same.
Thanks in advance.
The main hold-up here is in writing so many files.
Here is one way:
use warnings;
use strict;
use feature 'say';
my $file = shift // die "Usage: $0 csv-file\n";
my @lines = do { local @ARGV = $file; <> };
chomp @lines;

my @fhs = map {
    open my $fh, '>', "f${_}.csv" or die $!;
    $fh
} 1 .. scalar( split /,/, $lines[0] );

for (@lines) {
    my ($first, @cols) = split /,/;
    say {$fhs[$_]} join(',', $first, $cols[$_])
        for 0..$#cols;
}
I didn't time this against any other approaches. Assembling data for each file first and then dumping it in one operation into each file may help, but first let us know how large the original CSV file is.
Opening so many output files at once (one per column, in @fhs) may pose problems. If that is the case, the simplest approach is to first assemble all the data and then open and write one file at a time:
use warnings;
use strict;
use feature 'say';
my $file = shift // die "Usage: $0 csv-file\n";
open my $fh, '<', $file or die "Can't open $file: $!";
my @data;
while (<$fh>) {
    chomp;
    my ($first, @cols) = split /,/;
    push @{$data[$_]}, join(',', $first, $cols[$_])
        for 0..$#cols;
}

for my $i (0..$#data) {
    open my $fh, '>', $i+1 . '.csv' or die $!;
    say $fh $_ for @{$data[$i]};
}
This depends on whether the entire original CSV file, plus a bit more, can be held in memory.
With the samples and attempts you have shown, please try the following awk code. Since opening all the output files together may fail with the infamous "too many open files" error, this version collects all the values into an array and, in the END block, prints them one by one, closing each output file as soon as its contents have been written.
awk '
BEGIN{ FS=OFS="," }
{
  for(i=1;i<NF;i++){
    value[i]=(value[i]?value[i] ORS:"") ($1 OFS $(i+1))
  }
}
END{
  for(i=1;i<NF;i++){
    outFile=i".csv"
    print value[i] > (outFile)
    close(outFile)
  }
}
' large_file.csv
I needed the same functionality and wrote it in bash.
Not sure if it will be faster than ravindersingh13's answer, but I hope it will help someone.
Actual version: https://github.com/pgrabarczyk/csv-file-splitter
#!/usr/bin/env bash
set -eu
SOURCE_CSV_PATH="${1}"
LINES_PER_FILE="${2}"
DEST_PREFIX_NAME="${3}"
DEBUG="${4:-0}"
split_files() {
  local source_csv_path="${1}"
  local lines_per_file="${2}"
  local dest_prefix_name="${3}"
  local debug="${4}"

  _print_log "source_csv_path: ${source_csv_path}"

  local dest_prefix_path="$(pwd)/output/${dest_prefix_name}"
  _print_log "dest_prefix_path: ${dest_prefix_path}"

  local headline=$(awk "NR==1" "${source_csv_path}")
  local file_no=0

  mkdir -p "$(dirname ${dest_prefix_path})"

  local lines_in_files=$(wc -l "${source_csv_path}" | awk '{print $1}')
  local files_to_create=$(((lines_in_files-1)/lines_per_file))
  _print_log "There are ${lines_in_files} lines in the file. I will create ${files_to_create} files of ${lines_per_file} lines each (the last file may have fewer)."

  _print_log "Start processing."

  for (( start_line=1; start_line<=lines_in_files; )); do
    last_line=$((start_line+lines_per_file))
    file_no=$((file_no+1))
    local file_path="${dest_prefix_path}$(printf "%06d" ${file_no}).csv"

    if [ $debug -eq 1 ]; then
      _print_log "Creating file ${file_path} with lines [${start_line};${last_line}]"
    fi

    echo "${headline}" > "${file_path}"
    awk "NR>${start_line} && NR<=${last_line}" "${source_csv_path}" >> "${file_path}"

    start_line=$last_line
  done

  _print_log "Done."
}

_print_log() {
  local log_message="${1}"
  local date_time=$(date "+%Y-%m-%d %H:%M:%S.%3N")
  printf "%s - %s\n" "${date_time}" "${log_message}" >&2
}
split_files "${SOURCE_CSV_PATH}" "${LINES_PER_FILE}" "${DEST_PREFIX_NAME}" "${DEBUG}"
Execution:
bash csv-file-splitter.sh "sample.csv" 3 "result_" 1
I tried a solution using the Text::CSV module.
#! /usr/bin/env perl
use warnings;
use strict;
use utf8;
use open qw<:std :encoding(utf-8)>;
use autodie;
use feature qw<say>;
use Text::CSV;
my %hsh = ();
my $csv = Text::CSV->new({ sep_char => ',' });
print "Enter filename: ";
chomp(my $filename = <STDIN>);
open (my $ifile, '<', $filename);
while (<$ifile>) {
    chomp;
    if ($csv->parse($_)) {
        my @fields = $csv->fields();
        my $first  = shift @fields;
        while (my ($i, $v) = each @fields) {
            push @{$hsh{($i + 1).".csv"}}, "$first,$v";
        }
    } else {
        die "Line could not be parsed: $_\n";
    }
}
close($ifile);

while (my ($k, $v) = each %hsh) {
    open(my $ifile, '>', $k);
    say {$ifile} $_ for @$v;
    close($ifile);
}
exit(0);
It can display the text in the file; however, after I add new text in gedit, it does not show the updated content.
sub start_thread {
    my @args = @_;
    print('Thread started: ', @args, "\n");

    open(my $myhandle, '<', @args) or die "unable to open file"; # typical open call
    for (;;) {
        while (<$myhandle>) {
            chomp;
            print $_."\n";
        }
        sleep 1;
        seek FH, 0, 1; # this clears the eof flag on FH
    }
}
Update: video links:
https://docs.google.com/file/d/0B4hnKBXrOBqRWEdjTDFIbHJselk/edit?usp=sharing
https://docs.google.com/file/d/0B4hnKBXrOBqRcEFhU3k4dUN4cXc/edit?usp=sharing
How do I print $curpos for the updated data?
for (;;) {
    for ($curpos = tell($myhandle); $_ = <$myhandle>;
         $curpos = tell($myhandle)) {
        # search for some stuff and put it into files
        print $curpos."\n";
    }
    sleep(1);
    seek(FILE, $curpos, 0);
}
Like I said, it works for me. The changes to your script are minimal - just some cleanup.
Script: test_tail.pl
#!/usr/bin/perl
sub tail_file {
    my $filename = shift;
    open(my $myhandle, '<', $filename) or die "unable to open file"; # typical open call
    for (;;) {
        print "About to read file...\n";
        while (<$myhandle>) {
            chomp;
            print $_."\n";
        }
        sleep 1;
        seek $myhandle, 0, 1; # this clears the eof flag on FH
    }
}
tail_file('/tmp/test_file.txt');
Then:
echo -e "aaa\nbbb\nccc\n" > /tmp/test_file.txt
# wait a bit
echo -e "ddd\neee\n" >> /tmp/test_file.txt
Meanwhile (in a different terminal):
$ perl /tmp/test_tail.pl
About to read file...
aaa
bbb
ccc
About to read file...
About to read file...
About to read file...
ddd
eee
Instead of this:
seek $myhandle, 0, 1; # this clears the eof flag on FH
Can you try something like this:
my $pos = tell $myhandle;
seek $myhandle, $pos, 0; # reset the file handle in an alternate way
The file system is trying to give you a consistent view of the file you are reading. To see the changes, you would need to reopen the file.
To see an example of this, try the following:
1. Create a file that has 100 lines of text in it, a man page, for instance:
man tail > foo
2. Print the file slowly:
cat foo | perl -ne 'print; sleep 1;'
3. While that is going on, in another shell or editor, try editing the file by deleting most of the lines.
Result: The file will continue to print slowly, as if you never edited it. Only when you try to print it again, will you see the changes.
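For example, one way to do that reopen (a sketch that borrows the stat()/inode comparison idea from the log-rotation answer earlier on this page; the path is a placeholder):

use strict;
use warnings;

my $file = '/tmp/test_file.txt';   # placeholder path
open my $fh, '<', $file or die "Cannot open $file: $!";

for (;;) {
    print while <$fh>;             # print whatever is currently readable
    sleep 1;
    seek $fh, 0, 1;                # clear the EOF flag so we can keep reading

    # If the name now points at a different file (e.g. the editor saved a new
    # one in its place), reopen it; open() on $fh implicitly closes the old handle.
    my $name_inode = ( stat $file )[1];
    if ( defined $name_inode and $name_inode != ( stat $fh )[1] ) {
        open $fh, '<', $file or die "Cannot reopen $file: $!";
    }
}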
The following would also work:
my $TAIL = '/usr/bin/tail -f'; # Adjust accordingly
open my $fh, "$TAIL |"
or die "Unable to run $TAIL : $!";
while (<$fh>) {
# do something
}
I'm not even sure if this is possible, but sure am hoping that it is.
I have this line 766 times in the file backup.xml:
*** Hosting Services
I then have the file list.txt which contains 766 lines in it. I need to replace *** with the contents of each of the 766 lines in list.txt - and it needs to be in the same order if at all possible.
Thanks in advance for any help!
Idea:
loop over the lines of the B(ackup file)
if you (F)ind a B-line to change
read the next line of the L(ist file)
change
print the line to R(result file)
Plan:
read_open B
read_open L
write_open R
while (line from B) {
    if (F) {
        read replacement from L
        change line
    }
    print line to R
}
close R, L, B
Implementation I (read_open, loop, look at B):
use strict;
use warnings;
use English qw(-no_match_vars);
my $bfn = '../data/AA-backup-xml';
open my $bfh, '<', $bfn or die "Can't read open '$bfn': $OS_ERROR";
while (my $line = <$bfh>) {
    print $line;
}
close $bfh or die "Can't read close '$bfn': $OS_ERROR";
output:
perl 01.pl
whatever
whatever
*** Hosting Services
whatever
whatever
whatever
*** Hosting Services
whatever
whatever
*** Hosting Services
whatever
whatever
whatever
*** Hosting Services
Implementation II (read/write, F, replace, first result):
use Modern::Perl;
use English qw(-no_match_vars);
my $bfn = '../data/AA-backup-xml';
open my $bfh, '<', $bfn or die "Can't read open '$bfn': $OS_ERROR";
my $lfn = '../data/AA-list.txt';
open my $lfh, '<', $lfn or die "Can't read open '$lfn': $OS_ERROR";
my $rfn = '../data/AA-result';
open my $rfh, '>', $rfn or die "Can't write open '$rfn': $OS_ERROR";
while (my $line = <$bfh>) {
    if ($line =~ /\*{3}/) {
        my $rpl = <$lfh>;
        $rpl = substr($rpl, 0, 3);
        $line =~ s/\*{3}/$rpl/;
    }
    print $rfh $line;
}
close $rfh or die "Can't write close '$rfn': $OS_ERROR";
close $lfh or die "Can't read close '$lfn': $OS_ERROR";
close $bfh or die "Can't read close '$bfn': $OS_ERROR";
output:
type ..\data\AA-result
whatever
whatever
001 Hosting Services
whatever
whatever
whatever
002 Hosting Services
whatever
whatever
003 Hosting Services
whatever
whatever
whatever
004 Hosting Services
If this does not 'work' for you (perhaps I mis-guessed the structure of B, or the F strategy is too naive), then publish a representative sample of B, L, and R.
You can use Tie::File to look at and modify files by representing them as arrays, i.e.:
use strict;
use warnings;
use Tie::File;
tie my @ra1, 'Tie::File', "test.txt"  or die;   # *** Hosting Services
tie my @ra2, 'Tie::File', "test1.txt" or die;   # stuff you want to replace *** with

# assumes equal length
for (my $i = 0; $i <= $#ra1; $i++)
{
    my $temp = $ra2[$i];
    $ra1[$i] =~ s/(\*{3})/$temp/;
}

untie @ra1;
untie @ra2;
The above code replaces *** with the corresponding line of your list file. By writing $ra1[$i]=~s/(\*{3})/$temp/ we are directly changing the file that @ra1 is tied to.
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
SET /a linenum=0
(
FOR /f "delims=" %%i IN ('findstr /n "$" ^<backup.xml') DO (
SET "line=%%i"
SET "line=!line:*:=!"
IF DEFINED line (
IF "!line!"=="*** Hosting Services" (
SET /a linenum+=1
FOR /f "tokens=1*delims=:" %%r IN ('findstr /n "$" ^<list.txt') DO (
IF !linenum!==%%r (ECHO(%%s Hosting Services)
)
) ELSE (ECHO(!line!)
) ELSE (ECHO()
)
)>new.xml
GOTO :eof
IIUC, this should replace each "*** Hosting Services" line from backup.xml, substituting the *** with the corresponding line from list.txt, and create a new file, new.xml.
It's pretty short and sweet in awk:
awk '
NR == FNR {list[FNR]=$0; next}
/\*\*\* Hosting Services/ {sub(/\*\*\*/, list[++count])}
{print}
' list.txt backup.xml > new_backup.xml
a2p turns that into
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' '; # set output field separator
$\ = "\n"; # set output record separator
line: while (<>) {
    chomp;    # strip record separator
    if ($. == ($.-$FNRbase)) {
        $list{($.-$FNRbase)} = $_;
        next line;
    }
    if (/\*\*\* Hosting Services/) {
        ($s_ = '"'.($list{++$count}).'"') =~ s/&/\$&/g, s/\*\*\*/eval $s_/e;
    }
    print $_;
}
continue {
    $FNRbase = $. if eof;
}
I have a simple log file which is very messy and I need it to be neat. The file contains log headers, but they are all jumbled up together, so I need to sort the log file according to the log headers. There is no static number of lines - that is, there is no fixed number of lines under each header in the text file. I am using Perl's grep to pick out the headers.
The Log files goes something like this:
Car LogFile Header
<text>
<text>
<text>
Car LogFile Header
<text>
Car LogFile Header
<and so forth>
I have put together and searched for a simple algorithm, but it does not seem to be working. Can someone please guide me? Thanks!
#!/usr/bin/perl
#use 5.010; # must be present to import the new 5.10 functions, notice
#that it is 5.010 not 5.10
my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";
open(FH, $srce);
my @buf = <FH>;
close(FH);
my @lines = grep (/$string1/, @buffer);
After executing the code, there is no result shown at the terminal. Any ideas?
I think you want something like:
my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";
open my $fh, '<', $srce or die "Could not open $srce: $!";
my @lines = sort grep /\Q$string1/, <$fh>;
print @lines;
Make sure you have the right file path and that the file has lines that match your test pattern.
It seems like you are missing a lot of very basic concepts and maybe cutting and pasting code you see elsewhere. If you're just starting out, pick up a Perl tutorial such as Learning Perl. There are other books and references listed in perlfaq2.
Always use:
use strict;
use warnings;
This would have told you that @buffer is not defined.
#!/usr/bin/perl
use strict;
use warnings;
my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";
open(my $FH, $srce) or die "Failed to open file $srce ($!)";
my @buf = <$FH>;
close($FH);
my @lines = grep (/$string1/, @buf);
print @lines;
Perl is tricky for experts, so experts use the warnings it provides to protect them from making mistakes. Beginners need to use the warnings so they don't make mistakes they don't even know they can make.
(Because you didn't get a chance to chomp the input lines, you still have newlines at the end so the print prints the headings one per line.)
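If you would rather handle the newlines explicitly, a small variation (a sketch, not part of the original answer) is to chomp the matches and add the newline back when printing:

chomp( my @lines = grep /$string1/, @buf );   # strip the trailing newlines
print "$_\n" for @lines;                      # one heading per line, explicitly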
I don't think grep is what you want really.
As you pointed out in brian's answer, the grep will only give you the headers and not the subsequent lines.
I think you need an array where each element is the header and the subsequent lines up to the next header.
Something like this:
#!/usr/bin/perl
use strict;
use warnings;
my $srce = "./default.log";
my $string1 = "Car LogFile Header";
my @logs;
my $log_entry;

open(my $FH, $srce) or die "Failed to open file $srce ($!)";

my $found = 0;
while (my $buf = <$FH>)
{
    if ($buf =~ /$string1/)
    {
        if ($found)
        {
            push @logs, $log_entry;
        }
        $found = 1;
        $log_entry = $buf;
    }
    else
    {
        $log_entry = $log_entry . $buf;
    }
}
if ($found)
{
    push @logs, $log_entry;
}

close($FH);
print sort @logs;
I think that's what is being asked for.
Perl's grep is not the same as the Unix grep command, in that it does not print anything to the screen.
The general syntax is: grep Expr, LIST
Evaluates Expr for each element of LIST and returns a list consisting of those elements for which the expression evaluated to true.
In your case, all the @buffer elements which match the value of $string1 will be returned.
You can then print the resulting array to actually see them.
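For instance (a minimal sketch reusing $string1 and the @buf array from the question's code):

my @matches = grep { /$string1/ } @buf;   # returns the matching elements as a list
print @matches;                           # nothing is shown until you print them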
You just stored everything in an array instead of printing it out. It's also not necessary to keep the whole file in memory. You can read and print the match results line by line, like this:
my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";
open(FH, $srce);
while (my $line = <FH>) {
    if ($line =~ m/$string1/) {
        print $line;
    }
}
close FH;
Hello, I found a way to extract links from an HTML file.
#!/usr/bin/perl -w

# Links graber 1.0
# Author : peacengell
# 28.02.13

####

my $file_links = "links.txt";
my @line;
my $line;

open( FILE, $file_links ) or die "Can't find File";

while (<FILE>) {
    chomp;
    $line = $_;

    @word = split( /\s+/, $line );
    @word = grep( /href/, @word );

    foreach $x (@word) {
        if ( $x =~ m/ul.to/ ) {
            $x =~ s/href="//g;
            $x =~ s/"//g;
            print "$x \n";
        }
    }
}
You can use it and modify it; please let me know if you modify it.
I want to print certain lines from a text file in Unix. The line numbers to be printed are listed in another text file (one on each line).
Is there a quick way to do this with Perl or a shell script?
Assuming the line numbers to be printed are sorted.
open my $fh, '<', 'line_numbers' or die $!;
my @ln = <$fh>;

open my $tx, '<', 'text_file' or die $!;

foreach my $ln (@ln) {
    my $line;
    do {
        $line = <$tx>;
    } until $. == $ln and defined $line;
    print $line if defined $line;
}
$ cat numbers
1
4
6
$ cat file
one
two
three
four
five
six
seven
$ awk 'FNR==NR{num[$1];next}(FNR in num)' numbers file
one
four
six
You can avoid the limitations of some of the other answers (requirements for sorted lines) simply by using eof within the context of a basic while(<>) block. That will tell you when you've stopped reading line numbers and started reading data. Note that you need to reset $. when the switch occurs.
# Usage: perl script.pl LINE_NUMS_FILE DATA_FILE
use strict;
use warnings;
my %keep;
my $reading_line_nums = 1;
while (<>){
    if ($reading_line_nums){
        chomp;
        $keep{$_} = 1;
        $reading_line_nums = $. = 0 if eof;
    }
    else {
        print if exists $keep{$.};
    }
}
cat -n foo | join foo2 - | cut -d" " -f2-
where foo is your file with lines to print and foo2 is your file of line numbers
Here is a way to do this in Perl without slurping anything so that the memory footprint of the program is independent of the sizes of both files (it does assume that the line numbers to be printed are sorted):
#!/usr/bin/perl
use strict; use warnings;
use autodie;
@ARGV == 2
    or die "Supply src_file and filter_file as arguments\n";

my ($src_file, $filter_file) = @ARGV;

open my $src_h, '<', $src_file;
open my $filter_h, '<', $filter_file;

my $to_print = <$filter_h>;

while ( my $src_line = <$src_h> ) {
    last unless defined $to_print;
    if ( $. == $to_print ) {
        print $src_line;
        $to_print = <$filter_h>;
    }
}
close $filter_h;
close $src_h;
Generate the source file:
C:\> perl -le "print for aa .. zz" > src
Generate the filter file:
C:\> perl -le "print for grep { rand > 0.75 } 1 .. 52" > filter
C:\> cat filter
4
6
10
12
13
19
23
24
28
44
49
50
Output:
C:\> f src filter
ad
af
aj
al
am
as
aw
ax
bb
br
bw
bx
To deal with an unsorted filter file, you can modify the while loop:
while ( my $src_line = <$src_h> ) {
    last unless defined $to_print;
    if ( $. > $to_print ) {
        seek $src_h, 0, 0;
        $. = 0;
    }
    if ( $. == $to_print ) {
        print $src_line;
        $to_print = <$filter_h>;
    }
}
This would waste a lot of time if the contents of the filter file are fairly random because it would keep rewinding to the beginning of the source file. In that case, I would recommend using Tie::File.
I wouldn't do it this way with large files, but (untested):
open(my $fh1, "<", "line_number_file.txt") or die "Err: $!";
chomp(my @line_numbers = <$fh1>);
$_-- for @line_numbers;
close $fh1;
open(my $fh2, "<", "text_file.txt") or die "Err: $!";
my @lines = <$fh2>;
print @lines[@line_numbers];
close $fh2;
I'd do it like this:
#!/bin/bash
numbersfile=numbers
datafile=data
while read lineno; do
    sed -n "${lineno}p" "$datafile"
done < "$numbersfile"
Downside to my approach is that it will spawn a lot of processes so it will be slower than other options. It's infinitely more readable though.
This is a short solution using bash and sed
sed -n -e "$(cat num |sed 's/$/p/')" file
Where num is the file of numbers and file is the input file (tested on OS X Snow Leopard).
$ cat num
1
3
5
$ cat file
Line One
Line Two
Line Three
Line Four
Line Five
$ sed -n -e "$(cat num |sed 's/$/p/')" file
Line One
Line Three
Line Five
$ cat input
every
good
bird
does
fly
$ cat lines
2
4
$ perl -ne 'BEGIN{($a,$b) = `cat lines`} print if $.==$a .. $.==$b' input
good
bird
does
If that's too much for a one-liner, use
#! /usr/bin/perl
use warnings;
use strict;
sub start_stop {
    my ($path) = @_;
    open my $fh, "<", $path
        or die "$0: open $path: $!";

    local $/;
    return ($1, $2) if <$fh> =~ /\s*(\d+)\s*(\d+)/;
    die "$0: $path: could not find start and stop line numbers";
}
my($start,$stop) = start_stop "lines";
while (<>) {
    print if $. == $start .. $. == $stop;
}
Perl's magic open allows for creative possibilities such as
$ ./lines-between 'tac lines-between|'
print if $. == $start .. $. == $stop;
while (<>) {
Here is a way to do this using Tie::File:
#!/usr/bin/perl
use strict; use warnings;
use autodie;
use Tie::File;
@ARGV == 2
    or die "Supply src_file and filter_file as arguments\n";

my ($src_file, $filter_file) = @ARGV;

tie my @source, 'Tie::File', $src_file, autochomp => 0
    or die "Cannot tie source '$src_file': $!";

open my $filter_h, '<', $filter_file;

while ( my $to_print = <$filter_h> ) {
    print $source[$to_print - 1];
}

close $filter_h;
untie @source;
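Since Tie::File gives random access to $source[...] by index, the line numbers in the filter file do not need to be sorted here. Usage would mirror the earlier script; assuming this one is saved as tie_filter.pl (a hypothetical name):

C:\> perl tie_filter.pl src filter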