I want to print certain lines from a text file in Unix. The line numbers to be printed are listed in another text file (one on each line).
Is there a quick way to do this with Perl or a shell script?
Assuming the line numbers to be printed are sorted.
open my $fh, '<', 'line_numbers' or die $!;
chomp(my @ln = <$fh>);

open my $tx, '<', 'text_file' or die $!;

foreach my $ln (@ln) {
    my $line;
    do {
        $line = <$tx>;
    } until !defined $line or $. == $ln;    # stop at the wanted line, or at EOF
    print $line if defined $line;
}
$ cat numbers
1
4
6
$ cat file
one
two
three
four
five
six
seven
$ awk 'FNR==NR{num[$1];next}(FNR in num)' numbers file
one
four
six
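Here FNR==NR is true only while awk reads the first file (numbers), so each wanted line number is stored as a key of the num array; while reading the second file, the bare condition (FNR in num) prints any line whose number is a key of num.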
You can avoid the limitations of some of the other answers (the requirement for sorted line numbers) simply by using eof within the context of a basic while(<>) block. That will tell you when you've stopped reading line numbers and started reading data. Note that you need to reset $. when the switch occurs.
# Usage: perl script.pl LINE_NUMS_FILE DATA_FILE
use strict;
use warnings;

my %keep;
my $reading_line_nums = 1;

while (<>){
    if ($reading_line_nums){
        chomp;
        $keep{$_} = 1;
        $reading_line_nums = $. = 0 if eof;   # switch to the data file; reset line counter
    }
    else {
        print if exists $keep{$.};
    }
}
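For example, with the numbers and file samples shown above, perl script.pl numbers file prints one, four and six.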
cat -n foo | join foo2 - | cut -d" " -f2-
where foo is your file with lines to print and foo2 is your file of line numbers. Note that join expects both inputs sorted lexically on the join field, so with more than nine lines the numbers from cat -n may need sorting or zero-padding.
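For example, with the numbers and file samples shown earlier, the result might look like:

$ cat -n file | join numbers - | cut -d' ' -f2-
one
four
six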
Here is a way to do this in Perl without slurping anything so that the memory footprint of the program is independent of the sizes of both files (it does assume that the line numbers to be printed are sorted):
#!/usr/bin/perl

use strict; use warnings;
use autodie;

@ARGV == 2
    or die "Supply src_file and filter_file as arguments\n";

my ($src_file, $filter_file) = @ARGV;

open my $src_h, '<', $src_file;
open my $filter_h, '<', $filter_file;

my $to_print = <$filter_h>;

while ( my $src_line = <$src_h> ) {
    last unless defined $to_print;
    if ( $. == $to_print ) {
        print $src_line;
        $to_print = <$filter_h>;
    }
}

close $filter_h;
close $src_h;
Generate the source file:
C:\> perl -le "print for aa .. zz" > src
Generate the filter file:
C:\> perl -le "print for grep { rand > 0.75 } 1 .. 52" > filter
C:\> cat filter
4
6
10
12
13
19
23
24
28
44
49
50
Output:
C:\> f src filter
ad
af
aj
al
am
as
aw
ax
bb
br
bw
bx
To deal with an unsorted filter file, you can modify the while loop:
while ( my $src_line = <$src_h> ) {
    last unless defined $to_print;
    if ( $. > $to_print ) {    # we passed the wanted line: rewind
        seek $src_h, 0, 0;
        $. = 0;
    }
    if ( $. == $to_print ) {
        print $src_line;
        $to_print = <$filter_h>;
    }
}
This would waste a lot of time if the contents of the filter file are fairly random because it would keep rewinding to the beginning of the source file. In that case, I would recommend using Tie::File.
I wouldn't do it this way with large files, but (untested):
open(my $fh1, "<", "line_number_file.txt") or die "Err: $!";
chomp(my @line_numbers = <$fh1>);
$_-- for @line_numbers;    # convert 1-based line numbers to 0-based indices
close $fh1;

open(my $fh2, "<", "text_file.txt") or die "Err: $!";
my @lines = <$fh2>;
print @lines[@line_numbers];    # array slice picks out the wanted lines
close $fh2;
I'd do it like this:
#!/bin/bash

numbersfile=numbers
datafile=data

while read -r lineno; do
    sed -n "${lineno}p" "$datafile"
done < "$numbersfile"

The downside to my approach is that it spawns a sed process per line number, so it will be slower than the other options. It's far more readable, though.
This is a short solution using bash and sed
sed -n -e "$(cat num |sed 's/$/p/')" file
where num is the file of line numbers and file is the input file (tested on OS X Snow Leopard).
$ cat num
1
3
5
$ cat file
Line One
Line Two
Line Three
Line Four
Line Five
$ sed -n -e "$(cat num |sed 's/$/p/')" file
Line One
Line Three
Line Five
$ cat input
every
good
bird
does
fly
$ cat lines
2
4
$ perl -ne 'BEGIN{($a,$b) = `cat lines`} print if $.==$a .. $.==$b' input
good
bird
does
If that's too much for a one-liner, use
#! /usr/bin/perl
use warnings;
use strict;
sub start_stop {
    my($path) = @_;
    open my $fh, "<", $path
        or die "$0: open $path: $!";
    local $/;    # slurp the whole file
    return ($1,$2) if <$fh> =~ /\s*(\d+)\s*(\d+)/;
    die "$0: $path: could not find start and stop line numbers";
}

my($start,$stop) = start_stop "lines";

while (<>) {
    print if $. == $start .. $. == $stop;
}
Perl's magic open allows for creative possibilities, such as feeding the script its own source reversed through tac:

$ ./lines-between 'tac lines-between|'
print if $. == $start .. $. == $stop;
while (<>) {
Here is a way to do this using Tie::File:
#!/usr/bin/perl

use strict; use warnings;
use autodie;
use Tie::File;

@ARGV == 2
    or die "Supply src_file and filter_file as arguments\n";

my ($src_file, $filter_file) = @ARGV;

tie my @source, 'Tie::File', $src_file, autochomp => 0
    or die "Cannot tie source '$src_file': $!";

open my $filter_h, '<', $filter_file;

while ( my $to_print = <$filter_h> ) {
    print $source[$to_print - 1];    # Tie::File indexes lines from 0
}

close $filter_h;
untie @source;
Related
I would like to know of a fast/efficient way in any program (awk/perl/python) to split a csv file (say 10k columns) into multiple small files each containing 2 columns. I would be doing this on a unix machine.
#contents of large_file.csv
1,2,3,4,5,6,7,8
a,b,c,d,e,f,g,h
q,w,e,r,t,y,u,i
a,s,d,f,g,h,j,k
z,x,c,v,b,n,m,z
I now want multiple files like this:
# contents of 1.csv
1,2
a,b
q,w
a,s
z,x
# contents of 2.csv
1,3
a,c
q,e
a,d
z,c
# contents of 3.csv
1,4
a,d
q,r
a,f
z,v
and so on...
I can do this currently with awk on small files (say 30 columns) like this:
awk -F, 'BEGIN{OFS=",";} {for (i=1; i < NF; i++) print $1, $(i+1) > i ".csv"}' large_file.csv
The above takes a very long time with large files and I was wondering if there is a faster and more efficient way of doing the same.
Thanks in advance.
The main hold up here is in writing so many files.
Here is one way
use warnings;
use strict;
use feature 'say';

my $file = shift // die "Usage: $0 csv-file\n";

my @lines = do { local @ARGV = $file; <> };
chomp @lines;

my @fhs = map {
    open my $fh, '>', "f${_}.csv" or die $!;
    $fh
}
1 .. scalar( split /,/, $lines[0] );

for (@lines) {
    my ($first, @cols) = split /,/;
    say {$fhs[$_]} join(',', $first, $cols[$_])
        for 0..$#cols;
}
I didn't time this against any other approaches. Assembling data for each file first and then dumping it in one operation into each file may help, but first let us know how large the original CSV file is.
Opening so many output files at once (for #fhs filehandles) may pose problems. If that is the case then the simplest way is to first assemble all data and then open and write a file at a time
use warnings;
use strict;
use feature 'say';

my $file = shift // die "Usage: $0 csv-file\n";
open my $fh, '<', $file or die "Can't open $file: $!";

my @data;
while (<$fh>) {
    chomp;
    my ($first, @cols) = split /,/;
    push @{$data[$_]}, join(',', $first, $cols[$_])
        for 0..$#cols;
}

for my $i (0..$#data) {
    open my $fh, '>', $i+1 . '.csv' or die $!;
    say $fh $_ for @{$data[$i]};
}
This depends on whether the entire original CSV file, plus a bit more, can be held in memory.
With your shown samples and attempts, please try the following awk code. Opening all the output files at once may fail with the infamous "too many open files" error, so this version collects all values into an array, prints them one by one in the END block, and closes each output file as soon as its contents have been written.
awk '
BEGIN{ FS=OFS="," }
{
  for(i=1;i<NF;i++){
    value[i]=(value[i]?value[i] ORS:"") ($1 OFS $(i+1))
  }
}
END{
  for(i=1;i<NF;i++){
    outFile=i".csv"
    print value[i] > (outFile)
    close(outFile)
  }
}
' large_file.csv
I needed the same functionality and wrote it in bash.
Not sure if it will be faster than ravindersingh13's answer, but I hope it will help someone.
Current version: https://github.com/pgrabarczyk/csv-file-splitter
#!/usr/bin/env bash
set -eu

SOURCE_CSV_PATH="${1}"
LINES_PER_FILE="${2}"
DEST_PREFIX_NAME="${3}"
DEBUG="${4:-0}"

split_files() {
  local source_csv_path="${1}"
  local lines_per_file="${2}"
  local dest_prefix_name="${3}"
  local debug="${4}"

  _print_log "source_csv_path: ${source_csv_path}"
  local dest_prefix_path="$(pwd)/output/${dest_prefix_name}"
  _print_log "dest_prefix_path: ${dest_prefix_path}"

  local headline=$(awk "NR==1" "${source_csv_path}")
  local file_no=0

  mkdir -p "$(dirname ${dest_prefix_path})"

  local lines_in_files=$(wc -l "${source_csv_path}" | awk '{print $1}')
  local files_to_create=$(((lines_in_files-1)/lines_per_file))
  _print_log "There are ${lines_in_files} lines in the file. I will create ${files_to_create} files with ${lines_per_file} lines each (the last file may have fewer)."

  _print_log "Start processing."

  for (( start_line=1; start_line<=lines_in_files; )); do
    last_line=$((start_line+lines_per_file))
    file_no=$((file_no+1))
    local file_path="${dest_prefix_path}$(printf "%06d" ${file_no}).csv"

    if [ $debug -eq 1 ]; then
      _print_log "Creating file ${file_path} with lines [${start_line};${last_line}]"
    fi

    echo "${headline}" > "${file_path}"    # repeat the CSV header in each chunk
    awk "NR>${start_line} && NR<=${last_line}" "${source_csv_path}" >> "${file_path}"
    start_line=$last_line
  done
  _print_log "Done."
}

_print_log() {
  local log_message="${1}"
  local date_time=$(date "+%Y-%m-%d %H:%M:%S.%3N")
  printf "%s - %s\n" "${date_time}" "${log_message}" >&2
}

split_files "${SOURCE_CSV_PATH}" "${LINES_PER_FILE}" "${DEST_PREFIX_NAME}" "${DEBUG}"
Execution:
bash csv-file-splitter.sh "sample.csv" 3 "result_" 1
I tried a solution using the Text::CSV module:
#! /usr/bin/env perl

use warnings;
use strict;
use utf8;
use open qw<:std :encoding(utf-8)>;
use autodie;
use feature qw<say>;
use Text::CSV;

my %hsh = ();
my $csv = Text::CSV->new({ sep_char => ',' });

print "Enter filename: ";
chomp(my $filename = <STDIN>);

open (my $ifile, '<', $filename);
while (<$ifile>) {
    chomp;
    if ($csv->parse($_)) {
        my @fields = $csv->fields();
        my $first = shift @fields;
        while (my ($i, $v) = each @fields) {
            push @{$hsh{($i + 1).".csv"}}, "$first,$v";    # group pairs by target file name
        }
    } else {
        die "Line could not be parsed: $_\n";
    }
}
close($ifile);

while (my ($k, $v) = each %hsh) {
    open(my $ofile, '>', $k);
    say {$ofile} $_ for @$v;
    close($ofile);
}

exit(0);
I am a beginner in perl, so please bear with me.
I have 2 files:
1
2
3
and
2
4
5
6
I want to create a new file that is the sum of the above 2 files:
output file:
3
6
8
6
What I am doing right now is reading the files as arrays and adding them element by element.
To add the arrays I am using the following:
$asum[@asum] = $array1[@asum] + $array2[@asum] while defined $array1[@asum] or defined $array2[@asum];
But this is giving the following error:
Argument "M-oM-;M-?3" isn't numeric in addition (+) at perl_ii.pl line 30.
Argument "M-oM-;M-?1" isn't numeric in addition (+) at perl_ii.pl line 30.
Use of uninitialized value in addition (+) at perl_ii.pl line 30.
I am using the following code to read files as arrays:
use strict;
use warnings;

my @array1;
open(my $fh, "<", "file1.txt") or die "Failed to open file1\n";
while(<$fh>) {
    chomp;
    push @array1, $_;
}
close $fh;

my @array2;
open(my $fh1, "<", "file2.txt") or die "Failed to open file2\n";
while(<$fh1>) {
    chomp;
    push @array2, $_;
}
close $fh1;
Anyone could tell me how to fix this, or give a better approach altogether?
Here is another Perl solution that makes use of the diamond (<>) file-read operator. This reads the files specified on the command line, rather than explicitly opening them within the program. (The <> operator is documented in perlop under "I/O Operators".)
The command line for this program would look like:
perl myprogram.pl file1 file2 > outputfile
Where file1 and file2 are the 2 input files and outputfile is the file you want to print the results of the addition.
#!/usr/bin/perl
use strict;
use warnings;

my @sums;
my $i = 0;
while (my $num = <>) {
    $sums[$i++] += $num;
    $i = 0 if eof;    # reset the slot index at the end of each input file
}
print "$_\n" for @sums;
Note: $i is reset to zero at the end of each file (in this case, after the first file is read). Actually, it is also reset to 0 after the second file is read. This has no effect on the program, however, because there are no files to be read after the second one in your example.
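With the sample files, reading file1 leaves @sums as (1, 2, 3); reading file2 then adds 2, 4 and 5 to those slots and appends 6, giving the expected (3, 6, 8, 6).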
The following solution makes the memory footprint of the program independent of the sizes of the files. Instead, now the memory footprint only depends on the number of files:
#!/usr/bin/env perl

use strict;
use warnings;

use Carp qw( croak );
use List::Util qw( sum );
use Path::Tiny;

run(@ARGV);

sub run {
    my @readers = map make_reader($_), @_;

    while (my @obs = grep defined, map $_->(), @readers) {
        print sum(@obs), "\n";
    }

    return;
}

sub make_reader {
    my $fname = shift;
    my $fhandle = path( $fname )->openr;
    my $is_readable = 1;
    sub {
        return unless $is_readable;
        my $line = <$fhandle>;
        return $line if defined $line;
        close $fhandle
            or croak "Cannot close '$fname': $!";
        $is_readable = 0;
        return;
    }
}
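As with the diamond version above, run it as perl script.pl file1 file2 > outputfile.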
You have two different problems with your script now:
First error
Argument "M-oM-;M-?3" isn't numeric in addition (+) at perl_ii.pl line
30
happens because your input files are saved in Unicode and first line is read with "\xFF\xFE" BOM bytes.
To fix it simply, just resave the files as ANSI text. If Unicode is required, then remove these bytes from first string you read from file.
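For example, a minimal sketch of that second option, assuming the files are read as raw bytes as in your code:

# Strip a UTF-8 BOM (bytes 0xEF 0xBB 0xBF) from the first line of each
# array, if present; assumes no :encoding layer on the filehandles.
$array1[0] =~ s/\A\xEF\xBB\xBF// if @array1;
$array2[0] =~ s/\A\xEF\xBB\xBF// if @array2;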
Second error
Use of uninitialized value in addition (+) at perl_ii.pl line 30.
happens because the loop runs up to the length of the longer array, so you access a 4th element of the first array that doesn't exist. To fix it, add the following guard for each input element:
$asum[@asum] = (@asum < @array1 ? $array1[@asum] : 0) + (@asum < @array2 ? $array2[@asum] : 0) while defined $array1[@asum] or defined $array2[@asum];
The logic of reading your two files is the same, and I suggest using a subroutine for that and calling it twice:
#!/usr/bin/env perl
use strict;
use warnings;

my @array1 = read_into_array('file1.txt');
my @array2 = read_into_array('file2.txt');

sub read_into_array
{
    my $filename = shift;
    my @array;

    open(my $fh, "<", $filename) or die "Failed to open $filename: $!\n";
    while(<$fh>) {
        chomp;
        push @array, $_;
    }
    close $fh;

    return @array;
}
But that's just an observation I made and not a solution to your problem. As CodeFuller already said, you should re-save your files as plain ASCII instead of UTF-8.
The second problem, Use of uninitialized value in addition (+), can also be solved with the Logical Defined Or operator // which was introduced in Perl 5.10:
my @asum;
$asum[@asum] = ($array1[@asum] // 0)
             + ($array2[@asum] // 0)
    while defined $array1[@asum] or defined $array2[@asum];

No, that is not a comment, but an operator very similar to ||. The difference is that it triggers when the left-hand side (lhs) is undef, while || triggers when the lhs is falsy (i.e. 0, '' or undef). Thus

$array1[@asum] // 0

gives 0 if $array1[@asum] is undef. It's the same as

defined($array1[@asum]) ? $array1[@asum] : 0
A different approach altogether:
$ paste -d '+' file1 file2 | sed 's/^+//;s/+$//' | bc
3
6
8
6
The paste command prints the files next to each other, separated by a + sign:
$ paste -d '+' file1 file2
1+2
2+4
3+5
+6
The sed command removes leading and trailing + signs, because those trip up bc:
$ paste -d '+' file1 file2 | sed 's/^+//;s/+$//'
1+2
2+4
3+5
6
And bc finally calculates the sums.
Here’s a rendition of Sinan’s approach in a more Perlish form:
#!/usr/bin/env perl
use 5.010; use strict; use warnings;
use autodie;
use List::Util 'sum';

my @fh = map { open my $fh, '<', $_; $fh } @ARGV;

while ( my @value = grep { defined } map { scalar readline $_ } @fh ) {
    say sum @value;
    @fh = grep { not eof $_ } @fh if @value < @fh;    # drop exhausted filehandles
}
Input:
orf00007 PHAGE_Prochl_MED4_213_NC_020845-gi|472340344|ref|YP_007673870.1| 7665 8618 0.210897481636936
orf00007 PHAGE_Prochl_MED4_213_NC_020845-gi|472340344|ref|YP_007673870.1| 7665 8618 0.210897481636936
orf00007 PHAGE_Prochl_P_HM2_NC_015284-gi|326783200|ref|YP_004323597.1| 7665 8618 0.207761175236097
orf00015 PHAGE_Megavi_lba_NC_020232-gi|448825467|ref|YP_007418398.1| 11594 13510 0.278721920668058
orf00015 PHAGE_Acanth_moumouvirus_NC_020104-gi|441432357|ref|YP_007354399.1| 11594 13510 0.278721920668058
The script I had implemented:
use feature qw/say/;
use Math::Trig;

open FILE, "out02.txt";
my @file = <FILE>;
close FILE;

my $aa = 0;
for (my $i = $aa; $i <= 17822; $i++) {
    if (($file[$i] >= 0.210)) {
        open(OUTFILE, '>>OUT_t10-t10.txt');
        print OUTFILE $file[$i];
    }
    else {}
}
Note: I need to use the last column (the float value, e.g. 0.210897481636936) as the criterion for printing the entire row. For example, if the user inputs the value '0.210', every row whose last column is >= that value should be printed. The expected output is:
Output:
orf00007 PHAGE_Prochl_MED4_213_NC_020845-gi|472340344|ref|YP_007673870.1| 7665 8618 0.210897481636936
orf00007 PHAGE_Prochl_MED4_213_NC_020845-gi|472340344|ref|YP_007673870.1| 7665 8618 0.210897481636936
orf00015 PHAGE_Megavi_lba_NC_020232-gi|448825467|ref|YP_007418398.1| 11594 13510 0.278721920668058
orf00015 PHAGE_Acanth_moumouvirus_NC_020104-gi|441432357|ref|YP_007354399.1| 11594 13510 0.278721920668058
A script like the following could work for you:
use strict;
use warnings;
use autodie;

die "Usage: $0 number file\n"
    if @ARGV != 2;

my $minval = shift;

while (<>) {
    my @cols = split;
    print if $cols[-1] >= $minval;    # compare the last field
}
And execute it like:
perl yourscript.pl 0.210 out02.txt >> OUT_t10-t10.txt
Or using a perl one-liner:
perl -lane 'print if $F[-1] >= 0.210' out02.txt >> OUT_t10-t10.txt
Using awk:
awk -v value=0.210 '$NF >= value' file
Or
awk -v value=0.210 '$NF >= value' file > output_file
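In awk, $NF is the last whitespace-separated field of the line, and a condition with no action block prints every line for which it is true.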
This script (inspired by yours) solves the problem:
use strict;
use warnings;

my $user_filter = 0.21;

open my $input_file, "<", "out02.txt" or die $!;    # modern way to open files
open my $output_file, ">>", "OUT_t10-t10.txt" or die $!;

while( my $line = <$input_file> ) {
    if( $line =~ / ([\d\.]+)\s*$/ ) {
        # A number was found at the end of the line
        if( $1 >= $user_filter ) {      # check the condition (>= as the question asks)
            print $output_file $line;   # write the row to the output file
        }
    }
}

close $input_file;
close $output_file;
I have a simple log file which is very messy, and I need it to be neat. The file contains log headers, but they are all jumbled up together, so I need to sort the log file according to the log headers. There is no fixed number of lines under each header. I am using Perl's grep to pick out the headers.
The Log files goes something like this:
Car LogFile Header
<text>
<text>
<text>
Car LogFile Header
<text>
Car LogFile Header
<and so forth>
I have put together a simple algorithm from searching around, but it does not seem to be working. Can someone please guide me? Thanks!
#!/usr/bin/perl
#use 5.010; # must be present to import the new 5.10 functions; note
#that it is 5.010, not 5.10

my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";

open(FH, $srce);
my @buf = <FH>;
close(FH);

my @lines = grep (/$string1/, @buffer);
After executing the code, there is no result shown at the terminal. Any ideas?
I think you want something like:
my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";

open my $fh, '<', $srce or die "Could not open $srce: $!";
my @lines = sort grep /\Q$string1/, <$fh>;
print @lines;
Make sure you have the right file path and that the file has lines that match your test pattern.
It seems like you are missing a lot of very basic concepts and maybe cutting and paste code you see elsewhere. If you're just starting out, pick up a Perl tutorial such as Learning Perl. There are other books and references listed in perlfaq2.
Always use:
use strict;
use warnings;
This would have told you that @buffer is not defined.
#!/usr/bin/perl
use strict;
use warnings;

my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";

open(my $FH, $srce) or die "Failed to open file $srce ($!)";
my @buf = <$FH>;
close($FH);

my @lines = grep (/$string1/, @buf);
print @lines;
Perl is tricky for experts, so experts use the warnings it provides to protect them from making mistakes. Beginners need to use the warnings so they don't make mistakes they don't even know they can make.
(Because you didn't get a chance to chomp the input lines, you still have newlines at the end so the print prints the headings one per line.)
I don't think grep is what you want really.
As you pointed out in brian's answer, the grep will only give you the headers and not the subsequent lines.
I think you need an array where each element is the header and the subsequent lines up to the next header.
Something like:
#!/usr/bin/perl
use strict;
use warnings;

my $srce = "./default.log";
my $string1 = "Car LogFile Header";

my @logs;
my $log_entry;

open(my $FH, $srce) or die "Failed to open file $srce ($!)";

my $found = 0;
while(my $buf = <$FH>)
{
    if($buf =~ /$string1/)
    {
        if($found)
        {
            push @logs, $log_entry;    # previous entry is complete
        }
        $found = 1;
        $log_entry = $buf;             # start a new entry at the header
    }
    else
    {
        $log_entry = $log_entry . $buf;
    }
}
if($found)
{
    push @logs, $log_entry;            # don't forget the last entry
}

close($FH);

print sort @logs;
I think it's what is being asked for.
Perl's grep is not the same as the Unix grep command, in that it does not print anything to the screen.
The general syntax is: grep Expr, LIST
It evaluates Expr for each element of LIST and returns a list consisting of those elements for which the expression evaluated to true.
In your case, all the elements of @buffer that match $string1 will be returned.
You can then print the returned list to actually see them.
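For example:

my @matches = grep { /$string1/ } @buffer;    # keep only the matching lines
print @matches;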
You just stored everything in an array instead of printing it out. It's also not necessary to keep the whole file in memory. You can read and print the match results line by line, like this:
my $srce = "./root/Desktop/logs/Default.log";
my $string1 = "Car LogFile Header";

open(FH, $srce);
while(my $line = <FH>) {
    if($line =~ m/$string1/) {
        print $line;
    }
}
close FH;
Hello, I found a way to extract links from an HTML file:
#!/usr/bin/perl -w

# Links grabber 1.0
# Author: peacengell
# 28.02.13

####

my $file_links = "links.txt";

open( FILE, $file_links ) or die "Can't find File";

while (<FILE>) {
    chomp;
    my $line = $_;

    my @word = split(/\s+/, $line);    # split the line on whitespace
    @word = grep(/href/, @word);       # keep only tokens containing href
    foreach my $x (@word) {
        if ( $x =~ m/ul\.to/ ) {       # keep only ul.to links
            $x =~ s/href="//g;
            $x =~ s/"//g;
            print "$x \n";
        }
    }
}
You can use it and modify it; please let me know if you do.
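As one possible modification, here is a minimal sketch using the HTML::LinkExtor module (from the HTML::Parser distribution) instead of splitting on whitespace; the file name page.html is just a placeholder:

#!/usr/bin/perl
use strict;
use warnings;
use HTML::LinkExtor;

# Print the href attribute of every <a> tag; a real HTML parser copes
# with markup that whitespace-splitting misses.
my $parser = HTML::LinkExtor->new(sub {
    my ($tag, %attr) = @_;
    print "$attr{href}\n" if $tag eq 'a' and defined $attr{href};
});
$parser->parse_file('page.html');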
I have a source text in a file and am looking for code that would take the second (or, in general, the n-th) row from this file and print it to a separate file.
Any idea how to do this?
You can do this natively in Perl with the flip-flop operator and the special variable $. (used internally by ..), which contains the current line number:
# prints lines 3 to 8 inclusive from stdin:
while (<>)
{
    print if 3 .. 8;
}
Or from the command line:
perl -wne'print if 3 .. 8' < filename.txt >> output.txt
You can also do this without Perl with: head -n3 filename.txt | tail -n1 >> output.txt
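More generally, head -n "$N" filename.txt | tail -n 1 prints just the N-th line.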
You could always:
Read all of the file in and put it into one variable.
Split the variable at the newlines and store the rows in an array.
Write the value at index 1 (for the second row) or at index n-1 (for the n-th row) to the separate file, as sketched below.
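A minimal sketch of those three steps (the file names and the row number are placeholders):

use strict;
use warnings;

my $n = 2;    # 1-based row number to extract

open my $in, '<', 'source.txt' or die $!;
my $content = do { local $/; <$in> };    # step 1: slurp the whole file
my @rows = split /\n/, $content;         # step 2: split at the newlines

open my $out, '>', 'output.txt' or die $!;
print $out "$rows[$n - 1]\n";            # step 3: write the n-th row (index n-1)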
Use it like script.pl file line_num > outfile (or >> outfile to append).
This uses lexical filehandles and three-argument open, which are preferred to global filehandles and two-argument open.
#!/usr/bin/perl
use strict;
use warnings;
use English qw( -no_match_vars );
use Carp qw( croak );

my ( $fn, $line_num ) = @ARGV;

open ( my $in_fh, '<', "$fn" ) or croak "Can't open '$fn': $OS_ERROR";
while ( my $line = <$in_fh> ) {
    if ( $INPUT_LINE_NUMBER == $line_num ) {
        print "$line";
    }
}
note: $INPUT_LINE_NUMBER == $.
Here's a slightly improved version that handles any number of line numbers and prints to a file:

script.pl <infile> <outfile> <num1> <num2> <num3> ...
#!/usr/bin/perl
use strict;
use warnings;
use English qw( -no_match_vars );
use Carp qw( croak );
use List::MoreUtils qw( any );

my ( $ifn, $ofn, @line_nums ) = @ARGV;

open ( my $in_fh , '<', "$ifn" ) or croak "can't open '$ifn': $OS_ERROR";
open ( my $out_fh, '>', "$ofn" ) or croak "can't open '$ofn': $OS_ERROR";

while ( my $line = <$in_fh> ) {
    if ( any { $INPUT_LINE_NUMBER eq $_ } @line_nums ) {
        print { $out_fh } "$line";
    }
}
I think this will do what you want:
line_transfer_script.pl:
open(READFILE, "<file_to_read_from.txt");
open(WRITEFILE, ">file_to_write_to.txt");

my $line_to_print = $ARGV[0];    # pass the line number you want transferred as the first argument
my $current_line_counter = 0;

while( my $current_line = <READFILE> ) {
    $current_line_counter++;    # count lines starting from 1
    if( $current_line_counter == $line_to_print ) {
        print WRITEFILE $current_line;
    }
}

close(WRITEFILE);
close(READFILE);
Then you'd call it like: perl line_transfer_script.pl 2 and that would write the 2nd line from file_to_read_from.txt into file_to_write_to.txt.
# Grab everything from line $line through the end of $input with tail,
# then write it to $output:
my $content = `tail -n +$line $input`;
open OUTPUT, ">$output" or die $!;
print OUTPUT $content;
close OUTPUT;