Command line arguments in Perl - perl

I am working on an open source project for GSoC and I have this piece of Perl code with me. I need to create another Perl file for a similar task. However, I am having trouble understanding 3 lines of this file. More specifically, I am not able to understand why some of the names have a $ symbol. I assume it is because they are command line arguments. However, I am not sure. I would like to have the meaning of these 3 lines explained to me.
open(NC, "|nc localhost 17001") || die "nc failed: $!\n";
print NC "scm hush\n(observe-text \"$_\")\n";
print "submit-one: $_\n";

$! and $_ are global variables; they are documented in perlvar:
$_ The default input and pattern-searching space
$! If used in a numeric context, yields the current value of the errno variable, identifying the last system call error. If used in a string context, yields the corresponding system error string.
open(NC, "|nc localhost 17001") || die "nc failed: $!\n";
will run the command nc with those arguments, attaching the NC filehandle to its standard input (the leading | means "pipe to this command"); if the open fails, die prints the error message from $!.
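A minimal sketch of the same pattern, with cat standing in for nc localhost 17001 so the example is self-contained (the loop also shows $_ being set implicitly):

```perl
use strict;
use warnings;

# 'cat' stands in for 'nc localhost 17001' here.
open(my $nc, '|-', 'cat') or die "cat failed: $!\n";

for ('hello', 'world') {
    # Inside the loop each element is in $_, the default variable.
    print $nc "scm hush\n(observe-text \"$_\")\n";
    print "submit-one: $_\n";
}
close($nc) or die "cat exited with status $?\n";
```

The three-argument open with a lexical filehandle does the same job as the two-argument `open(NC, "|nc ...")` form, but is the more modern idiom.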

Related

How to store output of module avail command in perl?

#!/depot/local/bin/perl5.8.0
my @data = `module avail icwbev_plus`;
print "Data Array : @data \n";
my $data1 = `module avail icwbev_plus`;
print "Data $data1 \n";
my $data2 = system("module avail icwbev_plus");
print "S Data $data2 ";
Output :
Data Array :
Data
S Data -1
I am not getting why it is not storing output to a variable.
Please help me to solve this. Thanks in advance.
To quote from the documentation for system (Emphasis added):
The return value is the exit status of the program as returned by the wait call. To get the actual exit value, shift right by eight (see below). See also exec. This is not what you want to use to capture the output from a command; for that you should use merely backticks or qx//, as described in "`STRING`" in perlop. Return value of -1 indicates a failure to start the program or an error of the wait(2) system call (inspect $! for the reason).
That, combined with the blank output of the other attempts, suggests that this module command isn't present in your path when you try to execute it. (I suspect that if you followed best practices and included use warnings; you'd get a warning about using an undefined value when you try to print $data1.)
Anyways, if this module command is present on the computer you're running your perl code on, try using the absolute path to it (my $data1 = qx!/foo/bar/module avail icwbev_plus!), or put the directory it's in in your path before running the script.
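The shift-right-by-eight detail from the system documentation can be checked with a self-contained snippet that uses perl itself as the child process, so nothing outside the script is assumed:

```perl
use strict;
use warnings;

# Run a child that exits with status 3; $^X is the path to the
# currently running perl binary.
my $rc = system($^X, '-e', 'exit 3');

die "failed to start child: $!\n" if $rc == -1;

my $exit = $rc >> 8;          # actual exit value, per the system() docs
print "raw=$rc exit=$exit\n"; # raw=768 exit=3
```

The raw value 768 is 3 shifted left by eight bits, which is why system's return value must be shifted back down to recover the exit code.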
The module command is a shell alias or function, so it cannot be called directly via backticks or a system call.
To get the output of an avail sub-command, you should call the modulecmd command, which is what the module shell alias/function invokes.
To find the location of modulecmd on your system, run type module in a regular shell session; this exposes the command called by the module shell alias/function.
The fully qualified path to the modulecmd command can then be used through a back-tick or a system call to get the result of an avail sub-command:
To get the output of a module avail command (in terse format to simplify parsing):
#!/depot/local/bin/perl5.8.0
my $data1 = `/usr/share/Modules/libexec/modulecmd.tcl perl avail --terse icwbev_plus 2>&1`;
print "Data $data1 \n";
Note the --terse format used to simplify result parsing. Also stderr is redirected to stdout to catch the actual output of the command (as modulecmd primarily uses stdout to output environment change commands).
module outputs to stderr, not stdout, which is not captured by qx/backticks. You can try:
`LMOD_REDIRECT=yes module avail ...`
See https://lmod.readthedocs.io/en/latest/040_FAQ.html

what is the meaning of -s in Perl and how does it work?

I have a perl program like
$var= "hello world";
$var = -s $var;
print $var;
When we print the value of $var, it shows an error like
Use of uninitialized value $var in print at line 3.
Can anyone explain how this works? What does -s do? Is it a function? I couldn't find anything about it in perldoc.
The -s file test operator accepts either a file name string or a valid opened file handle, and returns the size of the file in bytes. If the file doesn't exist (I presume you have no file called hello world), it returns undef.
It is documented in perldoc -f -X
There is also a perl command-line switch -s which is unrelated. It is documented in perldoc perlrun. That is the documentation that you have found, but it is irrelevant to using -s within a Perl program
-s is one of many file tests available in Perl. This particular test returns the file size in bytes, so it can be used to check whether a file is empty.
In your sample code the test returned undef, as it could not find a file named hello world.
You can read more about file tests in Perl here: http://perldoc.perl.org/functions/-X.html
-s is an oddly named function documented in perldoc -f -X. But despite the dash in its name, -s is just like any other function.
-s returns the size of the file provided as an argument. On error, it returns undef and sets $!.
To find out what error you are getting, check if the size is undefined.
defined( my $size = -s $qfn )
or die("Can't stat \"$qfn\": $!\n");
In this case, it's surely because hello world isn't a path to a file.
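Both behaviours can be seen side by side with a small self-contained check using a temporary file (File::Temp is a core module):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write 11 bytes to a real file, then ask -s for its size.
my ($fh, $path) = tempfile();
print $fh "hello world";      # 11 bytes
close $fh;

printf "size: %d\n", -s $path;   # size: 11

# A string that does not name an existing file yields undef, not 0
# (assuming no file literally called "hello world" in the current directory):
my $size = -s "hello world";
print defined $size ? "defined\n" : "undef\n";   # undef
```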

How to read STDOUT from a sub-process in OO Perl

In Perl, one way to read the STDOUT of a subprocess is to use open:
open(PIPE, "ls -l |");
I was looking for a more object-oriented way to do this, and I've been using IO::Pipe with some success. I want to detect errors, though, specifically when the command is not executable. I can't figure out how to do that via IO::Pipe. Here's what I have:
use strict;
use warnings;
use IO::Pipe;
my($cmd) = join(" ", @ARGV);
open(PIPE, "$cmd |") || die qq(error opening PIPE);
while (<PIPE>) {
chomp;
print "DBG1: $_\n";
}
close PIPE;
my($pipe) = IO::Pipe->new();
$pipe->reader($cmd);
die qq(error opening IO::Pipe) if $pipe->eof();
while (<$pipe>) {
chomp;
print "DBG2: $_\n";
}
$pipe->close();
If the sub-process command is invalid, both checks will correctly die. If the sub-process produces no output, though, eof() will report an error, even if the command itself is fine:
$ perl pipe.pl "ls -l >/dev/null"
error opening IO::Pipe at pipe.pl line 20.
A bunch of questions, then:
Is there a reasonable OO way to read from a sub-process in Perl? Is IO::Pipe the correct tool to use? If so, how do I check to make sure the sub-process command is created successfully? If not, what should I be using? I don't want to write to the sub-process, so I don't think I want IPC::Open2 or IPC::Open3. I would prefer to use a core module, if possible.
The issue is not IO::Pipe. The problem is that eof is the wrong way to check for a pipe error. It doesn't mean there's no pipe; it means there's nothing to read from that pipe. You'd have the same problem with eof PIPE. It's perfectly fine for a sub-process to not print anything.
If you want to check the sub-process successfully ran, it turns out IO::Pipe already does that for you.
# IO::Pipe: Cannot exec: No such file or directory
$pipe->reader("hajlalglagl");
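Building on that, a fuller sketch reads from the child and then checks $? after close (IO::Pipe waits for the child when the handle is closed); perl itself is used as the child here so the example is self-contained:

```perl
use strict;
use warnings;
use IO::Pipe;

my $pipe = IO::Pipe->new;
# Spawn a child that prints two lines; reader() forks and execs the list.
$pipe->reader($^X, '-e', 'print "line1\nline2\n"');

while (my $line = <$pipe>) {
    chomp $line;
    print "DBG: $line\n";
}
$pipe->close;   # waits for the child and sets $?
die "child failed (exit " . ($? >> 8) . ")\n" if $?;
```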
Backticks is not a core module, but it seems to do what you're looking for.
use strict;
use warnings;
use Backticks;
my $cmd = Backticks->new(join(" ", @ARGV));
$cmd->run();
if ($cmd->success){
print $cmd->stdout
} else {
print "Command failed\n";
}
Running this with a valid command and then an invalid one gives the results below:
io_pipe.pl "uname -o"
GNU/Linux
io_pipe.pl "uname -z"
Command failed
Update
As pointed out by @thisSuitIsNotBlack, this module changes the default behaviour of backticks in perl. You should read the Notes section of the POD. The main one to be aware of is:
The overriding of backticks is provided by Filter::Simple. Source
filtering can be weird sometimes... if you want to use this module in
a purely traditional Perl OO style, simply turn off the source
filtering as soon as you load the module:
use Backticks;
no Backticks;

How read/write into a named pipe in perl?

I have a script whose input/output are plugged into named pipes. I try to write something to the first named pipe and to read the result from the second named pipe, but nothing happens.
I used open, then open2, then sysopen, without success:
sysopen(FH, "/home/Moses/enfr_kiid5/pipe_CGI_Uniform", O_RDWR);
sysopen(FH2, "/home/Moses/enfr_kiid5/pipe_Detoken_CGI", O_RDWR);
print FH "test 4242 test 4242" or die "error print";
This doesn't raise an error, but it doesn't work: I can't see any trace of the print, the test sentence is not written into the first named pipe, and trying to read from the second blocks the process.
Works here.
$ mkfifo pipe
$ cat pipe &
$ perl -e 'open my $f, ">", "pipe"; print $f "test\n"'
test
$ rm pipe
You don't really need fancy sysopen stuff, named pipes are really supposed to behave like regular files, albeit half-duplex. Which happens to be a difference between your code and mine, worth investigating if you really need this opening pattern.
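The same demonstration can be done in Perl alone, assuming a POSIX system (mkfifo comes from the core POSIX module); a reader in a forked child stands in for the cat pipe & above:

```perl
use strict;
use warnings;
use POSIX qw(mkfifo);
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $path = "$dir/pipe";
mkfifo($path, 0700) or die "mkfifo: $!";

# Half-duplex: one process reads, the other writes.
my $pid = fork() // die "fork: $!";
if ($pid == 0) {                       # child: the reader
    open(my $r, '<', $path) or die "read open: $!";
    print "got: ", scalar <$r>;
    exit 0;
}
open(my $w, '>', $path) or die "write open: $!";  # blocks until a reader appears
print $w "test 4242\n";
close $w;
waitpid($pid, 0);
```

Note that the open-for-write blocks until some process has the fifo open for reading, which is one reason a lone writer can appear to hang.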
You may need to unbuffer your output after opening the pipe:
sysopen(...);
sysopen(...);
$old=select FH;
$|=1;
select $old;
print FH...
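If you prefer not to juggle select, the core IO::Handle module gives every filehandle an autoflush method with the same effect; here is a sketch with a temporary file standing in for the named pipe so it runs anywhere:

```perl
use strict;
use warnings;
use IO::Handle;                  # adds the autoflush method to filehandles
use File::Temp qw(tempfile);

my ($out, $path) = tempfile();   # a regular file stands in for the pipe
$out->autoflush(1);              # same effect as the select/$| dance
print $out "test 4242\n";        # flushed immediately, no close needed

open(my $in, '<', $path) or die "open: $!";
print scalar <$in>;              # test 4242
```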
And, as friedo says, add a newline ("\n") to the end of your print statement!

"Null filename used" error

#!/usr/bin/perl
{
my $file = shift;
print $file;
require $file;
}
Running it as ./arg /root/perl/arg I'm getting:
Null filename used at /root/perl/arg line 13.
Compilation failed in require at ./arg line 6.
But the file actually exists, so why?
You have to call your program with one command-line argument:
./arg myfilename
Otherwise shift returns undef, and require is called with no filename!
An alternative would be to refer to the argument directly and add a check:
my $num_args = $#ARGV + 1;
if ($num_args != 1)
{
print "Error!";
exit;
}
my $file = $ARGV[0];
Here's a minimal example to reproduce your error messages. The actual error is not on the -e line, but in nullfn.pm. You're probably trying to use an empty string (undef?) in require on line 13 of the included file (/root/perl/arg). The calling file (./arg) is OK.
-bash$ cat nullfn.pm
#!/usr/bin/perl -w
require "";
1;
-bash$ perl -we 'require nullfn;'
Null filename used at nullfn.pm line 3.
Compilation failed in require at -e line 1.
The problem is that you're doing two requires. You've assumed that the "Null filename" error is coming from the first one, but it's actually coming from the second one.
The first require is in the code you posted, at line 6. It gets the value that you passed on the command line: "/root/perl/arg". The second require is on line 13 of "/root/perl/arg", and it is not getting a value for some reason. When it gets no value it dies with a "Null filename" error. Then execution goes back to the require at line 6 and perl reports that "Compilation failed".
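The empty-filename failure is easy to reproduce in isolation; this sketch traps it with eval and shows the kind of guard that avoids it:

```perl
use strict;
use warnings;

# require with an empty filename reproduces the reported error:
eval { require "" };
print "caught: $@";    # caught: Null filename used at ...

# So guard the value before handing it to require:
my $file = shift @ARGV;
if (defined $file && length $file) {
    require $file;
} else {
    warn "no filename given, skipping require\n";
}
```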
Here is a modified version of your code that explains what's happening as it goes:
$main::runcount++;
{
print "beginning run number $main::runcount\n";
print "\tARGV has ", scalar @ARGV, " arguments\n";
my $file = shift;
print "\tabout to require file `$file`\n";
require $file;
}
1;
And here's the output when I run it with itself as the only argument:
~$ perl arg arg
beginning run number 1
ARGV has 1 arguments
about to require file `arg`
beginning run number 2
ARGV has 0 arguments
about to require file ``
Null filename used at arg line 9.
Compilation failed in require at arg line 9.
From this its clear that the "Null filename" error is generated by the second require.
For fun I ran the script passing its own name twice:
~$ perl arg arg arg
beginning run number 1
ARGV has 2 arguments
about to require file `arg`
beginning run number 2
ARGV has 1 arguments
about to require file `arg`
Here you can see that the second run of the script is able to get a value from @ARGV. However, since "arg" was already required, we don't get a third run.
Another way I found to make it work is to give the complete path to the package in the require statement.