Command line argument variable @ARGV - perl

I have a task to convert script from Perl to PowerShell.
I read about Perl's command line argument array @ARGV. I understand that at the time of script execution, any arguments that are passed are captured by this special array variable. We can read @ARGV and assign values to scalar variables using:
($var1, $var2) = @ARGV;
I need to understand what the statement below is doing:
($var1, $var2, @ARGV) = @ARGV;
In my script, I have an if condition on values in @ARGV, and based on those values a respective subroutine is called.
As per my understanding, if we have more than two values in @ARGV, then the @ARGV on the left side of the parenthesized list is used to rewrite @ARGV with the remaining values?

It chops off the first two arguments from @ARGV and puts them in $var1 and $var2.
Personally I would have written it as:
$var1 = shift @ARGV;
$var2 = shift @ARGV;
But it is a matter of taste.
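For illustration, here is a minimal, hypothetical sketch of that assignment (the script name and argument values are made up):
#!/usr/bin/perl
use strict;
use warnings;

# Suppose this is invoked as: ./demo.pl alpha beta gamma delta
my ($var1, $var2);
($var1, $var2, @ARGV) = @ARGV;   # RHS is flattened first, then reassigned

print "var1: $var1\n";           # alpha
print "var2: $var2\n";           # beta
print "rest: @ARGV\n";           # gamma delta -- the remaining arguments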

Filehandle stored in hash variable reading as GLOB [duplicate]

Code
$ cat test1
hello
i am
lazer
nananana
$ cat 1.pl
use strict;
use warnings;
my @fh;
open $fh[0], '<', 'test1' or die $!;
my @res1 = <$fh[0]>; # Way1: why does this not work as expected?
print @res1."\n";
my $fh2 = $fh[0];
my @res2 = <$fh2>; # Way2: this works!
print @res2."\n";
Run
$ perl 1.pl
1
5
$
I am not sure why Way1 does not work as expected while Way2 does. Aren't those two methods the same? What is happening here?
Because of the dual nature of the <> operator (i.e. is it glob or readline?), the rules are that to behave as readline, you can only have a bareword or a simple scalar inside the brackets. So you'll have to either assign the array element to a simple scalar (as in your example), or use the readline function directly.
Because from perlop:
If what's within the angle brackets is neither a filehandle nor a simple scalar variable containing a filehandle name, typeglob, or typeglob reference, it is interpreted as a filename pattern to be globbed, and either a list of filenames or the next filename in the list is returned, depending on context. This distinction is determined on syntactic grounds alone. That means <$x> is always a readline() from an indirect handle, but <$hash{key}> is always a glob().
You can spell the <> operator as readline instead to avoid problems with this magic.
Anything more complex than a bareword (interpreted as a file handle) or a simple scalar $var is interpreted as an argument to the glob() function. Only barewords and simple scalars are treated as file handles to be iterated by the <...> operator.
Basically the rules are:
<bareword> ~~ readline bareword
<$scalar> ~~ readline $scalar
<$array[0]> ~~ glob "$array[0]"
<anything else> ~~ glob ...
It's because <$fh[0]> is parsed as glob($fh[0]).
Use readline instead:
my @res1 = readline($fh[0]);
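As a further sketch (assuming the same test1 file as above), you can also sidestep the issue entirely by opening into a plain lexical scalar, so that <$fh> is unambiguously a readline:
use strict;
use warnings;

# Open into a simple scalar instead of an array element, so the
# angle-bracket operator is parsed as readline rather than glob.
open my $fh, '<', 'test1' or die $!;
my @lines = <$fh>;
print scalar(@lines), "\n";   # number of lines in test1, as Way2 reports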

Using perl command line arguments in the format of both while(<>) and $ARGV[1], possibly using a shift command?

I basically want to do something like
while (<>) {
    my ($one, $two, $three) = split;
    if ($one > $ARGV[1]) {
        # some commands
    }
}
Where I would invoke it like
./script.pl text.txt 50
But obviously I don't want the while loop to try to read anything from 50.
Any ideas on the best, cleanest way to do this? For example, could I shift the command line arguments somehow?
<> reads from the ARGV filehandle, which you can think of as a concatenation of all the files named in @ARGV. The ARGV filehandle won't be initialized until <> is called, so it is safe to manipulate @ARGV before your while loop.
my $val = pop @ARGV;              # take last argument
my ($val) = splice @ARGV, 1, 1;   # take 2nd argument
...
while (<>) {                      # now the ARGV fh uses whatever is currently in @ARGV
    my ($one, $two, $three) = split;
    if ($one > $val) { ... }
}
Also note that if @ARGV is empty, the <> operator will read from standard input. So long as you empty the @ARGV array before you try to read from <>, something like this will also work:
./script.pl 50 < text.txt
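Putting that together, a small sketch of a complete script along those lines (it assumes whitespace-separated numeric columns in the input):
#!/usr/bin/perl
use strict;
use warnings;

# Pull the threshold off @ARGV before the loop, so <> only sees file
# names -- or nothing at all, in which case it falls back to STDIN.
my $val = pop @ARGV;   # ./script.pl text.txt 50   or   ./script.pl 50 < text.txt

while (<>) {
    my ($one, $two, $three) = split;
    next unless defined $one;
    print if $one > $val;
}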
The typical way this is done is to assign @ARGV to a list of variables, so the script documents what you were expecting to be passed:
my ($file, $count) = @ARGV;
Of course, this doesn't actually check the validity of what - if anything - is actually passed. If you want that, there are many option-processing modules to choose from. Many like Getopt::Long, but I prefer Getopt::Lucid. YMMV.
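For example, a hedged sketch of the Getopt::Long route (the option names --file and --count are just illustrative):
use strict;
use warnings;
use Getopt::Long;

# Hypothetical options; adapt the names and types to your script.
my ($file, $count);
GetOptions(
    'file=s'  => \$file,
    'count=i' => \$count,
) or die "Usage: $0 --file FILE --count N\n";

die "Missing --file\n"  unless defined $file;
die "Missing --count\n" unless defined $count;

print "file=$file count=$count\n";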
As with most things perl, Gabor Szabo has a great page about @ARGV here. You may find this quote from it useful:
How to extract the command line arguments from @ARGV
@ARGV is just a regular array in Perl. The only difference from arrays that you create is that it does not need to be declared, and it is populated by Perl when your script starts.
Aside from these issues, you can handle it as a regular array. You can go over the elements using foreach, or access them one by one using an index: $ARGV[0].
You can also use shift, unshift, pop or push on this array.
Indeed, not only can you fetch the content of @ARGV, you can also change it.
If you expect a single value on the command line, you can check what it was, or whether it was provided at all, by looking at $ARGV[0]. If you expect two values, you will also check $ARGV[1].
I recommend you rearrange the order of the arguments
./script.pl 50 text.txt
Putting the file name(s) last is a more common practice, and it simplifies the needed code to the following:
my $limit = shift(@ARGV);
while (<>) {
    my @fields = split;
    if ($fields[0] > $limit) {
        ...
    }
}
The trick is to remove all but the file names from @ARGV before while (<>).
This practice has the additional advantage that the following will simply read from STDIN:
./script.pl 50

Difference between "printf" and "print sprintf"

The following two simple perl programs have different behaviors:
#file1
printf @ARGV;
#file2
$tmp = sprintf @ARGV;
print $tmp;
$> perl file1 "hi %04d %.2f" 5 7.12345
#output: hi 0005 7.12
$> perl file2 "hi %04d %.2f" 5 7.12345
#output: 3
Why the difference? I had thought the two programs were equivalent. I wonder if there is a way to make file2 (using "sprintf") behave like file1.
The builtin sprintf function has a prototype:
$ perl -e 'print prototype("CORE::sprintf")'
$@
It treats the first argument as a scalar. Since you provided the argument @ARGV, it was coerced into a scalar by passing the number of elements in @ARGV instead.
Since the printf function has to support the syntax printf HANDLE TEMPLATE,LIST as well as printf TEMPLATE,LIST, it cannot support a prototype. So it always treats its arguments as a flat list, and uses the first element in the list as the template.
One way to make the second script work correctly would be to call it like
$tmp = sprintf shift(@ARGV), @ARGV;
Another difference between printf and sprintf is that print sprintf appends $\ to the output, while printf does not (thanks, ysth).
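A tiny sketch of that $\ difference (the separator string is arbitrary, chosen only to make it visible):
use strict;
use warnings;

$\ = "[EOL]\n";   # output record separator, normally empty

printf "via printf: %d\n", 42;                # printf never appends $\
print sprintf("via print+sprintf: %d", 42);   # print appends $\ => ...42[EOL]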
@ARGV contains the arguments passed to the script in list form. printf takes that list and prints it out as is.
In the second example you are using sprintf with the array and assigning the result to a scalar, which basically means the length of the array is stored in your variable $tmp. Hence you get 3 as the output.
From the perl docs (jaypal said it already)
Unlike printf, sprintf does not do what you probably mean when you pass it an array as your first argument. The array is given scalar context, and instead of using the 0th element of the array as the format, Perl will use the count of elements in the array as the format, which is almost never useful.
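To see that coercion concretely, a small sketch (the array contents mirror the command line from the question):
use strict;
use warnings;

my @args = ('hi %04d %.2f', 5, 7.12345);

# sprintf's $@ prototype forces the first argument into scalar context,
# so the whole array collapses to its element count and becomes the format.
my $wrong = sprintf @args;                 # "3"

# Peel the format off first, then pass the remaining values.
my $right = sprintf shift(@args), @args;   # "hi 0005 7.12"

print "$wrong\n$right\n";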

What happens internally when you have <FH>, <>, or <*> in perl?

I apologize if this question sounds simple; my intention is to understand in depth how this (these?) particular operator(s) works, and I was unable to find a satisfactory description in the perldocs (it probably exists somewhere, I just couldn't find it for the life of me).
Particularly, I am interested in knowing if
a) <>
b) <*> or whatever glob, and
c) <FH>
are fundamentally similar or different, and how they are used internally.
I built my own testing functions to gain some insight on this (presented below). I still don't have a full understanding (my understanding might even be wrong) but this is what I've concluded:
<>
In Scalar Context: Reads the next line of the "current file" being read (provided in @ARGV). Questions: This seems like a very particular scenario, and I wonder why it is the way it is and whether it can be generalized or not. Also what is the "current file" that is being read? Is it in a file handle? What is the counter?
In List Context: Reads ALL of the files in @ARGV into an array
<list of globs>
In Scalar Context: Name of the first file found in the current folder that matches the glob. Questions: Why the current folder? How do I change this? Is the only way to change this doing something like </home/*>?
In List Context: All the files that match the glob in the current folder.
<FH> just seems to return undef when assigned to a variable.
Questions: Why is it undef? Does it not have a type? Does this behave similarly when the FH is not a bareword filehandle?
General Question: What is it that handles the value of <> and the others during execution? In scalar context, is any sort of reference returned, or are the variables that we assign them to, at that point identical to any other non-ref scalar?
I also noticed that even though I am assigning them in sequence, the output is reset each time, i.e. I would have assumed that when I do
$thing_s = <>;
@thing_l = <>;
@thing_l would be missing the first item, since it was already received by $thing_s. Why is this not the case?
Code used for testing:
use strict;
use warnings;
use Switch;
use Data::Dumper;

die "Call with a list of files\n" if (@ARGV < 1);

my @whats = ('<>', '<* .*>', '<FH>');
my $thing_s;
my @thing_l;

for my $what (@whats) {
    switch ($what) {
        case ('<>') {
            $thing_s = <>;
            @thing_l = <>;
        }
        case ('<* .*>') {
            $thing_s = <* .*>;
            @thing_l = <* .*>;
        }
        case ('<FH>') {
            open FH, '<', $ARGV[0];
            $thing_s = <FH>;
            @thing_l = <FH>;
        }
    }
    print "$what in scalar context is: \n" . Dumper($thing_s) . "\n";
    print "$what in list context is: \n" . Dumper(@thing_l) . "\n";
}
The <> thingies are all iterators. All of these variants have common behaviour:
Used in list context, all remaining elements are returned.
Used in scalar context, only the next element is returned.
Used in scalar context, it returns undef once the iterator is exhausted.
These last two properties make it suitable for use as a condition in while loops.
There are two kinds of iterators that can be used with <>:
Filehandles. In this case <$fh> is equivalent to readline $fh.
Globs, so <* .*> is equivalent to glob '* .*'.
The <> is parsed as a readline when it contains either nothing, a bareword, or a simple scalar. More complex expressions can be embedded like <{ ... }>.
It is parsed as a glob in all other cases. This can be made explicit by using quotes: <"* .*"> but you should really be explicit and use the glob function instead.
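For instance, a short sketch of being explicit (the *.txt pattern is only an example):
use strict;
use warnings;

# Both forms run the same glob; the second states the intent plainly.
my @via_brackets = <*.txt>;
my @via_glob     = glob '*.txt';

print "bracket form: @via_brackets\n";
print "glob form:    @via_glob\n";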
Some details differ, e.g. where the iterator state is kept:
When reading from a file handle, the file handle holds that iterator state.
When using the glob form, each glob expression has its own state.
Another part is if the iterator can restart:
glob restarts after returning one undef.
filehandles can only be restarted by seeking – not all FHs support this operation.
If no file handle is used in <>, then this defaults to the special ARGV file handle. The behaviour of <ARGV> is as follows:
If @ARGV is empty, then ARGV is STDIN.
Otherwise, the elements of @ARGV are treated as file names. The following pseudocode is executed:
$ARGV = shift @ARGV;
open ARGV, $ARGV or die ...; # careful! no open mode is used
The $ARGV scalar holds the filename, and the ARGV file handle holds that file handle.
When ARGV would be eof, the next file from @ARGV is opened.
Only when @ARGV is completely empty can <> return undef.
This can actually be used as a trick to read from many files:
local @ARGV = qw(foo.txt bar.txt baz.txt);
while (<>) {
...;
}
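Since $ARGV tracks the file currently opened by <>, the trick can also be used to label each line with its source, as in this sketch (file names are hypothetical):
use strict;
use warnings;

local @ARGV = qw(foo.txt bar.txt);

while (my $line = <>) {
    print "$ARGV: $line";   # $ARGV names the file the line came from
}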
What is it that handles the value of <> and the others during execution?
The Perl compiler is very context-aware, and often has to choose between multiple ambiguous interpretations of a code segment. It will compile <> as a call to readline or to glob depending on what is inside the brackets.
In scalar context, is any sort of reference returned, or are the variables that we assign them to, at that point identical to any other non-ref scalar?
I'm not sure what you're asking here, or why you think the variables that take the result of a <> should be any different from other variables. They are always simple string values: either a filename returned by glob, or some file data returned by readline.
<FH> just seems to return undef when assigned to a variable. Questions: Why is it undef? Does it not have a type? Does this behave similarly when the FH is not a bareword filehandle?
This form will treat FH as a filehandle, and return the next line of data from the file if it is open and not at eof. Otherwise undef is returned, to indicate that nothing valid could be read. Perl is very flexible with types, but undef behaves as its own type, like Ruby's nil. The operator behaves the same whether FH is a global file handle or a variable that contains a reference to a typeglob.
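A brief sketch of the exhausted-handle case described above (the file name data.txt is hypothetical):
use strict;
use warnings;

open my $fh, '<', 'data.txt' or die $!;

my @all  = <$fh>;   # list context slurps everything; the handle is now at eof
my $line = <$fh>;   # scalar context on an exhausted handle returns undef

print "got undef, as expected\n" unless defined $line;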

How can I mix command line arguments and filenames for <> in Perl?

Consider the following silly Perl program:
$firstarg = $ARGV[0];
print $firstarg;
$input = <>;
print $input;
I run it from a terminal like:
perl myprog.pl sample_argument
And get this error:
Can't open sample_argument: No such file or directory at myprog.pl line 5.
Any ideas why this is? When it gets to the <>, is it trying to read from the (non-existent) file "sample_argument" or something? And why?
<> is shorthand for "read from the files specified in @ARGV, or if @ARGV is empty, then read from STDIN". In your program, @ARGV contains the value ("sample_argument"), and so Perl tries to read from that file when you use the <> operator.
You can fix it by clearing @ARGV before you get to the <> line:
$firstarg = shift @ARGV;
print $firstarg;
$input = <>; # now @ARGV is empty, so read from STDIN
print $input;
See the perlop man page, which reads in part:
The null filehandle <> is special: it can be used to emulate the behavior of sed and awk. Input from <> comes either from standard input, or from each file listed on the command line. Here's how it works: the first time <> is evaluated, the @ARGV array is checked, and if it is empty, $ARGV[0] is set to "-", which when opened gives you standard input. The @ARGV array is then processed as a list of filenames.
If you want STDIN, use STDIN, not <>.
By default, perl consumes the command line arguments as input files for <>. If you want to use an argument for something else, you should consume it yourself with shift.
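A sketch of the question's program rewritten along those lines, reading the argument from @ARGV but taking the line explicitly from STDIN:
use strict;
use warnings;

my $firstarg = $ARGV[0];   # the command line argument is left in @ARGV
print "$firstarg\n";

my $input = <STDIN>;       # explicitly STDIN, so @ARGV is never opened as a file
print $input;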