How to store output of module avail command in perl? - perl

#!/depot/local/bin/perl5.8.0
my @data = `module avail icwbev_plus `;
print "Data Array : @data \n" ;
my $data1 = `module avail icwbev_plus `;
print "Data $data1 \n" ;
my $data2 = system (" module avail icwbev_plus ");
print "S Data $data2 "
Output :
Data Array :
Data
S Data -1
I am not getting why it is not storing output to a variable.
Please help me to solve this. Thanks in advance.

To quote from the documentation for system (Emphasis added):
The return value is the exit status of the program as returned by the wait call. To get the actual exit value, shift right by eight (see below). See also exec. This is not what you want to use to capture the output from a command; for that you should use merely backticks or qx//, as described in "`STRING`" in perlop. Return value of -1 indicates a failure to start the program or an error of the wait(2) system call (inspect $! for the reason).
That combined with the blank output of the other attempts suggests that this module command isn't present in your path when you try to execute it. (I suspect that if you followed best practices and included use warnings; you'd get one about using an undefined value when you try to print $data1)
Anyway, if this module command is present on the computer you're running your Perl code on, try using the absolute path to it (my $data1 = qx!/foo/bar/module avail icwbev_plus!), or add its directory to your PATH before running the script.

The module command is a shell alias or function, so it cannot be called directly via backticks or a system call.
To get the output of an avail sub-command, you should call the modulecmd command, which is what the module shell alias/function invokes.
To find the location of modulecmd on your system, run `type module` in a regular shell session; this exposes the command called by the module shell alias/function.
The fully qualified path to the modulecmd command can then be used through a back-tick or a system call to get the result of an avail sub-command:
To get the output of a module avail command (in terse format to simplify parsing):
#!/depot/local/bin/perl5.8.0
my $data1 = `/usr/share/Modules/libexec/modulecmd.tcl perl avail --terse icwbev_plus 2>&1`;
print "Data $data1 \n"
Note the --terse format used to simplify result parsing. Also stderr is redirected to stdout to catch the actual output of the command (as modulecmd primarily uses stdout to output environment change commands).

module outputs to stderr, not stdout, which is not captured by qx/backticks. You can try:
`LMOD_REDIRECT=yes module avail ...`
See https://lmod.readthedocs.io/en/latest/040_FAQ.html

I have a Perl system call that is somehow causing a "malformed header from script: Bad header" error.

I have a Perl CGI program. myperlcgi.pl
Within this program I have the following:
my $retval = system('extprogram');
extprogram has a print statement within.
The output from extprogram is being included within myperlcgi.pl
I tried adding
> output.txt 2>&1
to system call but did not change anything.
How do I prevent the output from extprogram being used in myperlcgi.pl?
Surprised that stdout from system call is being used in myperlcgi.pl
The system command just doesn’t give you complete control over capturing STDOUT and STDERR of the executed command.
Use backticks or open to execute the command instead. That will capture the STDOUT of the command’s execution. If the command also outputs to STDERR, then you can append 2>&1 to redirect STDERR to STDOUT for capture in backticks, like so:
my $output = `$command 2>&1`;
If you really need the native return status of the executed command, you can get that information using $? or ${^CHILD_ERROR_NATIVE}. See perldoc perlvar for details.
Another option is to use the IPC::Open3 Perl library, but I find that method to be overkill for most situations.

Storing the output of an external command in a variable using backticks in perl

I am using the following code to store the output of an external command in a variable. But when I print the error, I get nothing.
$error1 = `hadoop fs -copyFromLocal $src_dir $tgt_dir`;
print "$error1\n"; # --> prints nothing
The output of the command in ` ` is:
copyFromLocal: Cannot create file/user/file5._COPYING_. Name node is in safe mode.
Is there anything wrong in storing the output?
The backticks in Perl only capture standard output. If hadoop is sending the message to standard error instead, backticks won't capture it. See the perlfaq answer for How can I capture STDERR from an external command? for several ways to do it. The simplest is to redirect the standard error file descriptor into the standard output file descriptor with 2>&1:
$error1 = `hadoop fs -copyFromLocal $src_dir $tgt_dir 2>&1`;
Modules such as Capture::Tiny are very nice as well.

Copy output and extract number from it in perl

I am working on a perl script in which I will run a command and get output like: your id is <895162>. I will store this string and read the number out of it. The problem is that my main command will run in a shell via the system command from perl, like:
#ids.csh is "echo your id is <1123221>"
my $p = system ("./ids.csh 2>&1 > /dev/null");
print "$p\n";
$p =~ s/[^0-9]//g;
but the output is not being copied to the $p variable. Where am I going wrong?
system runs a command but doesn't capture it. For that, you want qx/backticks:
my $p = `./ids.csh 2>/dev/null`;
As Len Jaffe said, you probably want to throw away stderr output (rather than displaying it to your screen or wherever your stderr is going), but not stdout (that contains the message you want to capture).
Note that when qx fails, it can do so for several different reasons and constructing a meaningful error message is not trivial. If you run into problems, consider using IPC::System::Simple's capture() instead.
You have redirected all of the output to /dev/null, which means that all of your output is being discarded.
I think you probably mean:
./ids.csh 2>/dev/null
Which will redirect stderr to /dev/null while leaving stdout unchanged.

How to redirect the console I/O to an already opened file in Perl

I'm using the system() function in Perl to execute commands. I want to redirect the console I/O of these calls to a file which is already open in my Perl script (see below). I know it is not possible to open the same file again in Perl for redirecting; however, I need to print everything to a single file (both my Perl script's print statements and the redirected outputs) from within the script. Could anyone please help me with this?
use strict;
use warnings;
open FPTR, ">Test.txt";
print FPTR "Executing Command1...\n";
system("Time >>Test.txt");
print FPTR "Executing Command2...\n";
system("Date >>Test.txt");
close FPTR;
Thanks,
Anand
STDOUT can be redirected to the already open file handle FPTR. On exiting the do block, STDOUT is restored thanks to the dynamic scope set by local:
do {
local *STDOUT = \*FPTR;
print `date`;
};
From http://perldoc.perl.org/functions/local.html
A local modifies the listed variables to be local to the enclosing block, file, or eval.
Try using the backtick operator to execute the command and store its output in a variable:
my $var = `time`;
As soon as you have the output stored you can simply append it to your file.
Hope that helps you.

How can I pass arguments from one Perl script to another?

I have a script which I run and after it's run it has some information that I need to pass to the next script to run.
The Unix/DOS commands are like so:
perl -x -s param_send.pl
perl -x -s param_receive.pl
param_send.pl is:
# Send param
my $send_var = "This is a variable in param_send.pl...\n";
$ARGV[0] = $send_var;
print "Argument: $ARGV[0]\n";
param_receive.pl is:
# Receive param
my $receive_var = $ARGV[0];
print "Parameter received: $receive_var";
But nothing is printed. I know I am doing it wrong but from the tutorials I can't figure out how to pass a parameter from one script to the next!
You can use a pipe character on the command line to connect stdout from the first program to stdin on the second program, which you can then write to (using print) or read from (using the <> operator).
perl param_send.pl | perl param_receive.pl
If you want the output of the first command to be the arguments to the second command, you can use xargs:
perl param_send.pl | xargs perl param_receive.pl
The %ENV hash in Perl holds the environment variables such as PATH, USER, etc. Any modifications to these variables are reflected only in the current process and any child process it may spawn. The parent process (which happens to be the shell in this particular instance) does not see these changes, so when the param_send.pl script ends, all changes are lost.
For example, if you were to do something like:
#!/usr/bin/perl
# param_send.pl
$ENV{'VAL'} = "Value to send to param_recv";
#!/usr/bin/perl
# param_recv.pl
print $ENV{'VAL'};
This wouldn't work since VAL is lost when param_send exits. One workaround is to call param_recv.pl from param_send.pl and pass the value as an environment variable or an argument,
#!/usr/bin/perl
# param_send.pl
$ENV{'VAL'} = "Value to send to param_recv";
system( $^X, "param_recv.pl");
OR
#!/usr/bin/perl
# param_send.pl
my $val = "Value to send to param_recv";
system( $^X, 'param_recv.pl', $val );
Other options include piping the output or you could check out this Perlmonks node for a more esoteric solution.
@ARGV is created at runtime and does not persist, so your second script will not be able to see the $ARGV[0] you assigned in the first script. As crashmstr points out, you need to execute the second script from the first, using one of the many methods for doing so. For example:
my $send_var = "This is a variable in param_send.pl...\n";
`perl param_receive.pl $send_var`;
or use an environment variable using %ENV:
my $send_var = "This is a variable in param_send.pl...\n";
$ENV{'send_var'} = $send_var;
For more advanced solutions, think about using sockets or IPC.