Perl: parse an input file given on the command line

I would like to parse an input file given on the command line. I ran the script as shown below, but I get the error "could not open file". Is my code wrong, or is what I type on the command line incorrect?
commandline> perl script.pl FILENAME1.TXT
Below is my code to read the input file:
my $filename = <STDIN>;
open (my $file, '<', $filename) or die "could not open file '$filename': $!";
my $str = do {local $/; <$file>};
close $file;

You're trying to read $filename from standard input, when it's an argument to the program. You probably want something like
my $filename = $ARGV[0];
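Putting it together, a minimal sketch of your script that takes the filename from the first command line argument instead of STDIN (the usage message is just illustrative):
#!/usr/bin/perl
use strict;
use warnings;

# Take the filename from the first command line argument, not from STDIN.
my $filename = $ARGV[0] // die "Usage: $0 FILENAME\n";

open (my $file, '<', $filename) or die "could not open file '$filename': $!";
my $str = do { local $/; <$file> };   # slurp the whole file into one string
close $file;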

Perl's command line arguments show up in the variable @ARGV.
my( $filename ) = @ARGV;
However, Perl also has the special ARGV filehandle that opens the files you specify on the command line:
while( <ARGV> ) { ... }
Even better, ARGV is the default filehandle:
while( <> ) { ... }
And ARGV falls back to standard input if you didn't specify any arguments. That means the last while works with either of these calls:
% perl script.pl filename.txt
% perl script.pl < filename.txt
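For example, a small sketch that prefixes each line with the name of the file it came from (while reading through <>, the current filename is in $ARGV, or "-" for standard input):
while ( <> ) {
    print "$ARGV: $_";   # $ARGV holds the file currently being read
}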
In your program, you read from STDIN, which is a different thing. That's standard input, and it is not related to the command line arguments. That's the data you send to the program after it's running. For example, you might prompt for the filename:
print "Enter the filename: ";
my $filename = <STDIN>;
chomp( $filename );


How to redirect the contents to a file instead of printing to the terminal

How can I avoid printing the contents to the terminal (which is time consuming with 20k lines in my case) and instead redirect them to a file in Perl?
This is just a sample and not the entire code:
if ($count eq $length)
{
push(@List,$line);
print "$line\n"; #Prints line to terminal which is time consuming
}
I tried the following, but it did not work:
if ($cnt eq $redLen)
{
push(@List,$line);
print $line > "/home/vibes/text";
}
Please let me know if my question is not clear.
Simply use the three-argument form of open.
use strict;
use warnings;
my $line = "Hello Again!";
open (my $fh, ">", "/home/vibes/text") || die "Failed to open /home/vibes/text $!";
print $fh "$line\n";
close($fh); # Always close opened files.
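Applied to your snippet, a sketch along those lines (keeping your $cnt, $redLen, @List and $line variables, and opening the file once before the check) might look like this:
open (my $fh, ">", "/home/vibes/text") || die "Failed to open /home/vibes/text $!";
if ($cnt eq $redLen)
{
push(@List,$line);
print $fh "$line\n";   # written to the file instead of the terminal
}
close($fh);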
The default filehandle in Perl is STDOUT. You can change it with a call to select:
print "Hello\n"; # goes to stdout
open my $fh, '>', '/home/vibes/text';
select($fh);
print "World\n"; # goes to file '/home/vibes/text'
From your shell, output redirection is usually a matter of appending > file to your command. This is true on both Unix-y systems and Windows.
$ perl my_script.pl > /home/vibes/text

Giving both input & output file at command line

So I have this line in the Perl script, which prints the output to STDOUT (the console):
printf "Line no. $i"
What code should I include in the program to direct this output to an output file given by the user on the command line (as shown below)?
Right now, the following portion asks the user for the input file:
print "enter file name";
chomp(my $file=<STDIN>);
open(DATA,$file) or die "error reading";
But I don't want to prompt the user for either the input or the output file.
What I want is a way for the user to give both the input and output files on the command line when running the program, like this:
perl input_file output_file program.pl
What code should I include for this?
You can use shift to read the command line arguments to your script. shift reads and removes the first element of an array. If no array is specified (and you are not inside a subroutine), it implicitly reads from @ARGV, which contains the list of arguments passed to your script. For example:
use strict;
use warnings;
use autodie;
# check that two arguments have been passed
die "usage: $0 input output\n" unless #ARGV == 2;
my $infile = shift;
my $outfile = shift;
# good idea to sanitise the arguments here
open my $in, "<", $infile;
open my $out, ">", $outfile;
while (<$in>) {
print $out $_;
}
close $in;
close $out;
You could call this script like perl script.pl input_file output_file and it would copy the contents of input_file to output_file.
The easiest approach here is to ignore input and output files within your program. Just read from STDIN and write to STDOUT. Let the user redirect those filehandles when calling your program.
Your program looks something like this:
#!/usr/bin/perl
use strict;
use warnings;
while (<STDIN>) {
# do something useful to the data in $_
print;
}
And you call it like this:
$ ./your_program.pl inputfile.txt > outputfile.txt
This is known as the "Unix Filter Model" and it's the most flexible way to write programs that read input and produce output.
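Because the program only reads STDIN and writes STDOUT, it also composes with other tools in a shell pipeline, for example:
$ cat inputfile.txt | ./your_program.pl > outputfile.txt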
You can use the @ARGV variable:
use strict;
use warnings;
if ( @ARGV != 2 )
{
    print "Usage : <program.pl> <input> <output>\n";
    exit;
}
open my $Input, "<", $ARGV[0] or die "error: $!\n";
open my $Output, ">>", $ARGV[1] or die "error: $!\n";
print $Output $_ while (<$Input>);
close($Input);
close($Output);
Note:
You should run the program in this format: perl program.pl input_file output_file

Perl reading and writing in files

Alright, so I'm back with another question. I know that in Python there is a way to read in a file without specifying which file it will be until you are at the command prompt. So basically you can set the script up so that you can read in any file you want, and you don't have to go back and change the code every time. Is there a way to do this in Perl? If so, can you write files like that too? Thanks.
This is what I have:
open (LOGFILE, "UNSUCCESSFULOUTPUT.txt") or die "Can't find file";
open FILE, ">", "output.txt" or die $!;
while(<LOGFILE>){
print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
This is what I have now:
#!/usr/local/bin/perl
my $argument1 = $ARGV[0];
open (LOGFILE, "<$argument1") or die "Can't find file";
open FILE, ">>output.txt" or die $!;
while(<LOGFILE>){
print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
And it's still not appending...
Command line arguments are provided in @ARGV. You can do as you please with them, including passing them as file names to open.
my ($in_qfn, $out_qfn) = @ARGV;
open(my $in_fh, '<', $in_qfn ) or die $!;
open(my $out_fh, '>', $out_qfn) or die $!;
print $out_fh $_ while <$in_fh>;
But that's not a very unixy way of doing things. In unix tradition, the following will read from every file specified on the command line, one line at a time:
while (<>) {
...
}
Output is usually placed in files through redirection.
#!/usr/bin/env perl
# This is mycat.pl
print while <>;
# Example usage.
mycat.pl foo bar > baz
# Edit foo in-place.
perl -i mycat.pl foo
The only time one usually touches @ARGV is to process options, and even then, one usually uses Getopt::Long instead of touching @ARGV directly.
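For example, a minimal Getopt::Long sketch (the --output option name here is purely illustrative, not something from the original code):
use strict;
use warnings;
use Getopt::Long;

my $output;                                   # hypothetical --output FILE option
GetOptions( 'output=s' => \$output ) or die "Bad options\n";

# GetOptions removes the options it parsed, so whatever is left in @ARGV
# is still picked up by the <> loop.
while (<>) {
    print;
}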
Regarding your code, your script should be:
#!/usr/bin/env perl
while (<>) {
print "ERROR in line $.\n" if /Error/;
}
Usage:
perl script.pl UNSUCCESSFULOUTPUT.txt >output.txt
You can get rid of perl from the command if you make script.pl executable (chmod u+x script.pl).
This is what I believe you want:
#!/usr/bin/perl
my $argument1 = $ARGV[0];
open (LOGFILE, "<$argument1") or die "Can't find file";
open (FILE, ">output.txt") or die $!;
while(<LOGFILE>){
print FILE "ERROR in line $.\n" if (/Error/);
}
close FILE;
close LOGFILE;
Run it from the command line as:
> perl nameofpl.pl mytxt.txt
For appending change this line:
open (FILE, ">output.txt") or die $!;
To the remarkably similar:
open (FILE, ">>output.txt") or die $!;
I assume you are asking how to pass an argument to a Perl script. This is done with the @ARGV variable.
use strict;
use warnings;
my $file = shift; # implicitly shifts from @ARGV
print "The file is: $file\n";
You can also make use of the magic of the diamond operator <>, which will open the arguments to the script as files, or use STDIN if no arguments are supplied. The diamond operator is used as a normal file handle, typically while (<>) ...
ETA:
With the code you supplied, you can make it more flexible by doing this:
use strict;
use warnings; # always use these
my $file = shift; # first argument, required
my $outfile = shift // "output.txt"; # second argument, optional
open my $log, "<", $file or die $!;
open my $out, ">", $outfile or die $!;
while (<$log>) {
print $out "ERROR in line $.\n" if (/Error/);
}
Also see ikegami's answer on how to make it more like other unix tools, e.g. accept STDIN or file arguments, and print to STDOUT.
As I commented in your earlier question, you may simply wish to use an already existing tool for the job:
grep -n Error input.txt > output.txt

Filehandle open() and the split variable

I am a beginner in Perl.
What I do not understand is the following:
To write a script that can:
Print the lines of the file $source with a comma delimiter.
Print the formatted lines to an output file.
Allow this output file to be specified on the command line.
Code:
my ( $source, $outputSource ) = @ARGV;
open( INPUT, $source ) or die "Unable to open file $source :$!";
Question: I do not understand how, when writing the code, one can arrange for the name of the output file to be specified on the command line.
I would rely on the redirection operator in the shell instead, such as:
script.pl input.txt > output.txt
Then it is a simple case of doing this:
use strict;
use warnings;
while (<ARGV>) {
s/\n/,/;
print;
}
Then you can even merge several files with script.pl input1.txt input2.txt ... > output_all.txt. Or just do one file at a time, with one argument.
If I understood your question correctly, I hope this example helps.
Program:
use warnings;
use strict;
## Check that the input and output files are given as command line arguments.
die "Usage: perl $0 input-file output-file\n" unless @ARGV == 2;
my ( $source, $output_source ) = @ARGV;
## Open both files, one for reading and other for writing.
open my $input, "<", $source or
die "Unable to open file $source : $!\n";
open my $output, ">", $output_source or
die "Unable to open file $output_source : $!\n";
## Read the file line by line, replace the newline at the end of each line
## with a ',' and print to the output file.
while ( my $line = <$input> ) {
$line =~ tr/\n/,/;
printf $output "%s", $line;
}
close $input;
close $output;
Execution:
$ perl script.pl infile outfile

Programmatically read from STDIN or input file in Perl

What is the slickest way to programmatically read from STDIN, or from an input file if one is provided, in Perl?
while (<>) {
print;
}
will read either from a file specified on the command line or from stdin if no file is given
If you need this loop construct on the command line, you can use the -n option:
$ perl -ne 'print;'
Here you just put the code that was between the {} in the first example inside the '' in the second.
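For example, the error-scanning loop from the earlier question could be written entirely on the command line (a sketch, reusing the /Error/ pattern and file names from above):
$ perl -ne 'print "ERROR in line $.\n" if /Error/' UNSUCCESSFULOUTPUT.txt > output.txt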
This provides a named variable to work with:
foreach my $line ( <STDIN> ) {
chomp( $line );
print "$line\n";
}
To read a file, redirect it into the program like this:
program.pl < inputfile
The "slickest" way in certain situations is to take advantage of the -n switch. It implicitly wraps your code with a while(<>) loop and handles the input flexibly.
In slickestWay.pl:
#!/usr/bin/perl -n
BEGIN {
    # do something once here
}
# implement logic for a single line of input
print $result;
At the command line:
chmod +x slickestWay.pl
Now, depending on your input do one of the following:
Wait for user input
./slickestWay.pl
Read from file(s) named in arguments (no redirection required)
./slickestWay.pl input.txt
./slickestWay.pl input.txt moreInput.txt
Use a pipe
someOtherScript | ./slickestWay.pl
The BEGIN block is necessary if you need to initialize some kind of object-oriented interface, such as Text::CSV or some such, which you can load by adding -M to the shebang line.
-l and -p are also your friends.
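For instance, -p prints each line after your code runs and -l strips and re-adds the newline for you, so a quick filter could be written as (a sketch; the Error substitution is just an example):
$ perl -lpe 's/Error/ERROR/' input.txt > output.txt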
You need to use the <> operator:
while (<>) {
print $_; # or simply "print;"
}
Which can be compacted to:
print while (<>);
Arbitrary file:
open my $F, "<file.txt" or die $!;
while (<$F>) {
print $_;
}
close $F;
If there is a reason you can't use the simple solution provided by ennuikiller above, then you will have to use Typeglobs to manipulate file handles. This is way more work. This example copies from the file in $ARGV[0] to that in $ARGV[1]. It defaults to STDIN and STDOUT respectively if files are not specified.
use English;
my $in;
my $out;
if ($#ARGV >= 0){
unless (open($in, "<", $ARGV[0])){
die "could not open $ARGV[0] for reading.";
}
}
else {
$in = *STDIN;
}
if ($#ARGV >= 1){
unless (open($out, ">", $ARGV[1])){
die "could not open $ARGV[1] for writing.";
}
}
else {
$out = *STDOUT;
}
while ($_ = <$in>){
$out->print($_);
}
Do
$userinput = <STDIN>; #read stdin and put it in $userinput
chomp ($userinput); #cut the return / line feed character
if you want to read just one line
Here is how I made a script that can take either command line arguments or a redirected text file.
if ($#ARGV < 1) {
@ARGV = ();
@ARGV = <>;
chomp(@ARGV);
}
This will reassign the contents of the file to @ARGV; from there you just process @ARGV as if someone had passed command line options.
WARNING
If no file is redirected, the program will sit there idle because it is waiting for input from STDIN.
I have not yet figured out a way to detect whether a file is being redirected in, to eliminate the STDIN issue.
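One possible workaround (my own suggestion, not part of the original answer) is to check whether STDIN is attached to a terminal with the -t filetest, and only slurp it when something was actually redirected or piped in:
if ($#ARGV < 1 && !(-t STDIN)) {   # -t STDIN is true when STDIN is a terminal
@ARGV = <STDIN>;
chomp(@ARGV);
}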
if(my $file = shift) { # if file is specified, read from that
open(my $fh, '<', $file) or die($!);
while(my $line = <$fh>) {
print $line;
}
}
else { # otherwise, read from STDIN
print while(<>);
}