command output is directed to the screen - perl

Novice Perl programmer here!
I'm using the system() function to get the return code for an external program (in this case - php), however, the command's output is still printed to the screen.
How do I prevent it from doing so?
My code is:
use strict; use warnings;
print 'Return code:', system('php -l wrong.php'), "\n";
This code does print the return code, but it also prints the output of the command executed.
Any help will be appreciated!
EDIT: further testing has shown that this happens only with the php lint command; using system() with other commands doesn't print anything extra...

What you want is IPC::Open3:
use IPC::Open3;
use Symbol qw(gensym);
my $err = gensym; #work around IPC::Open3's interface
my $pid = open3 my $wtr, my $rdr, $err,
    'some cmd', 'arg1', 'arg2', ...;
This separates out stdin into $wtr, stdout into $rdr, and stderr into $err.
This also gives you maximum control over communicating with the process.
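For example, a minimal sketch for the php -l case above (fine for small output; a command that writes a lot to both streams at once would need a select loop, as in an answer further down):
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;
my $pid = open3(my $wtr, my $rdr, $err, 'php', '-l', 'wrong.php');

close $wtr;                        # nothing to send on stdin
my @output = <$rdr>;               # captured instead of printed
my @errors = <$err>;

waitpid $pid, 0;                   # reap the child
print 'Return code: ', $? >> 8, "\n";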

If you are on a UNIX-like OS, you should be able to redirect the output in the command:
Try:
system('php -l wrong.php >> /dev/null') to get rid of what's being sent to stdout.
You could also open the command with a pipe to handle the output directly, which should be more portable.
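A minimal sketch of that piped-open variant, assuming the lint messages come out on stdout (the 2>&1 catches them either way):
open my $php, '-|', 'php -l wrong.php 2>&1' or die "Can't run php: $!";
my @lint = <$php>;                 # output lands here, not on the screen
close $php;
print 'Return code: ', $? >> 8, "\n";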

Related

How to get STDERR in Perl from a command executed in pipe with su -c

I'm trying to capture the output of the command executed as a different user using:
my $command = qq(sudo su - <username> -c '/usr/bin/whatever');
my $pid = open $cmdOutput, "-|", $command;
How can I capture the STDERR of /usr/bin/whatever?
I tried
$pid = open $cmdOutput, "-|", $command || die " something went wrong: $!";
but it looks like this is capturing the possible errors of "open" itself.
I also tried
my $command = qq(sudo su - <username> -c '/usr/bin/whatever' 2>/tmp/error.message);
which will redirect the STDERR to the file, which I can parse later, but I wanted some more straightforward solution.
Also, I only want to use core modules.
This is covered thoroughly in perlfaq8. Since you are using a piped open, the relevant examples are those that go by open3 from the core IPC::Open3 module.
Another option is to use IPC::Run for managing your processes, and the pump function will do what you need. The IPC::Open3 documentation says for IPC::Run
This is a CPAN module that has better error handling and more facilities than Open3.
With either of these you can manipulate STDOUT and STDERR separately or together, as needed. For convenient and complete output capture also see Capture::Tiny.
Other than 2>output redirection, there are no more elementary methods for the piped open.
If you don't mind mixing the streams or losing STDOUT altogether, another option is
my $command = 'cmd 2>&1 1>/dev/null';  # Remove 1>/dev/null to have both
my $pid = open my $cmdOutput, "-|", $command;
while (<$cmdOutput>) { print } # STDERR only
The first redirection merges STDERR stream with STDOUT so you get them both, and mixed (with STDOUT subject to buffering, thus things may well come out of order). The second redirect sends the STDOUT away so with it in place you read only the command's STDERR from the handle.
The question is about running an external command using open but I'd like to mention that the canonical and simple qx (backticks) can be used in the same way. It returns the STDOUT so redirection just like above is needed to get STDERR. For completeness:
my $cmd = 'cmd_to_execute';
my $allout = qx($cmd 2>&1);             # Both STDOUT and STDERR in $allout, or
my $stderr = qx($cmd 2>&1 1>/dev/null); # Only STDERR
my $exit_status = $?;
The qx puts the child process exit code (status) in $?. This can then be inspected for failure modes; see a summary in the qx page or a very thorough discussion in I/O operators in perlop.
Note that the STDERR returned this way is from the command, if it ran. If the command itself couldn't be run (for a typo in command name, or fork failed for some reason) then $? will be -1 and the error will be in $!.
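A small sketch of that inspection, following perlvar (cmd_to_execute as above):
my $out = qx(cmd_to_execute 2>&1);
if    ($? == -1) { print "Failed to run: $!\n" }             # couldn't start
elsif ($? & 127) { printf "Died from signal %d\n", $? & 127 }
else             { printf "Exited with %d\n", $? >> 8 }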
As suggested by zdim, I used the IPC::Open3 module and ended up with something like this doing the job for me:
use IPC::Open3;
use IO::Select;
use Symbol qw(gensym);

my $instanceCommand = qq(sudo su - <username> -c '<command>');
my ($infh, $outfh, $errfh, $pid);
$errfh = gensym();
$pid = open3($infh, $outfh, $errfh, $instanceCommand);
my $sel = IO::Select->new;
$sel->add($outfh, $errfh);
while (my @ready = $sel->can_read) {
    foreach my $fh (@ready) {
        my $line = <$fh>;
        if (not defined $line) {
            $sel->remove($fh);
            next;
        }
        if ($fh == $outfh) {
            chomp $line;
            #<----- command output processing ----->
        }
        elsif ($fh == $errfh) {
            chomp $line;
            #<----- command error processing ----->
        }
        else {
            die "Reading from something else\n";
        }
    }
}
waitpid($pid, 0);
Maybe not completely bulletproof, but it's working fine for me, even when executing odd cascaded scripts as <command>.
The desired destination, opened for writing, could be dup()'ed to file descriptor 2.
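One way to read that suggestion, as a sketch (reusing the /tmp/error.message path from the question; note this redirects the script's own STDERR too, so save and restore it if that matters):
open my $errlog, '>', '/tmp/error.message' or die "Can't open log: $!";
open STDERR, '>&', $errlog or die "Can't dup onto fd 2: $!";

my $pid = open my $cmdOutput, "-|", $command;   # the child inherits the redirected STDERR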

Redirect STDOUT to a file while process is waiting for user input

I am trying to redirect the STDOUT and STDERR into a log file, but I also want to print those streams to the console. I am using Perl, and my code looks like this:
use Capture::Tiny ':all';
my ($stdout, $stderr);
($stdout, $stderr) = capture {
    system($command);
};
print $stdout;
print $stderr;
It works, but if the command waits for a user input, the program doesn't print $stdout to STDOUT until a key is pressed. Is there any way to print $stdout to STDOUT before it needs user input? Line by line approach would be fine.
Thank you in advance!
Well, I'm not familiar with Capture::Tiny so this may not be entirely relevant - generally though, if I'm looking to handle STDIN, STDOUT and/or STDERR, then I look towards either open (if it's just one), or IPC::Open2 and IPC::Open3, which open multiple file descriptors attached to a process.
use IPC::Open3;
$pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR,
             'some cmd and args', 'optarg', ...);
use IPC::Open2;
$pid = open2(\*CHLD_OUT, \*CHLD_IN, 'some', 'cmd', 'and', 'args');
Although rather than the glob examples above, I would suggest using lexical filehandles:
my($chld_out, $chld_in);
$pid = open2($chld_out, $chld_in, 'some cmd and args');
You can then read and write from your filehandles (bear in mind though - by default a read will be blocking).
You do need to close and then (ideally) waitpid to clear up the process when you're done though.
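A small sketch of that pattern with a hypothetical filter command, writing one line and reading the replies:
use IPC::Open2;

my ($chld_out, $chld_in);
my $pid = open2($chld_out, $chld_in, 'cat -n');

print {$chld_in} "hello\n";
close $chld_in;                    # send EOF so the child can finish

while (my $line = <$chld_out>) {   # blocking reads, as noted above
    print $line;
}
close $chld_out;
waitpid $pid, 0;                   # reap the child; status ends up in $?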
You need to use Capture::Tiny's tee instead of capture.
The tee function works just like capture, except that output is captured as well as passed on to the original STDOUT and STDERR.
Just replace the function call and your output will end up both in the variables and on the screen.
use Capture::Tiny ':all';
my ($stdout, $stderr);
($stdout, $stderr) = tee {
    system($command);
};
A simple approach that I could think of:
#! /usr/bin/perl -w
# Using perl one liner as a command here
# which prints a string to STDOUT and STDERR
my $cmd = "perl -e 'print STDOUT \"stdout\n\"; print STDERR \"stderr\n\";'";
my $log = "./test.log";
# using "2>&1" we are redirecting stderr also to stdout
system("$cmd 2>&1 | tee $log");
# Sample run results in both the strings getting printed to console as well as to log file
> perl test.pl
stderr
stdout
> cat test.log
stderr
stdout

stop console output from unix command execution in perl

I am running unix commands in perl as
$cmd="ls -l";
$result=`$cmd`;
print $log_file $result;
but the output from the execution is also printed on the screen.
How do I execute the command without printing the result on the screen?
The standard output stream isn't printed to screen -- it's captured to $result. But there is a second output stream called the standard error stream that programs can write to, and which also by default goes to the screen. Many programs use this stream for logging, or (in the case of ls) for writing error messages. To capture this in addition, use
$cmd = "ls -l 2>&1";
To discard it instead, use
$cmd = "ls -l 2>/dev/null";
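Applied to the snippet from the question, a sketch:
my $cmd    = "ls -l 2>/dev/null";
my $result = `$cmd`;
print $log_file $result;           # nothing extra reaches the screen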
I think there's a mistake. Try that in a shell:
perl -e 'my $x = `ls -l`;'
That doesn't produce any output.
It probably happens because $log_file is somehow attached to STDOUT, duplicated to it, with something like:
open(my $log_file, ">&STDOUT");
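If that is the case, opening the log as a plain file (hypothetical name) keeps the output off the screen:
open(my $log_file, ">", "result.log") or die "Can't open log: $!";
print $log_file `ls -l`;           # goes only to the file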

How to capture both STDOUT and STDERR in two different variables using Backticks in Perl

Let's say I want to run an external program from my script with backticks and at the same time I want to capture both STDOUT and STDERR, but in two different variables. How can I do that? For instance, if I run this script...
my $cmd = `snmpwalk -v $version -c $community $hostname $oid`;
...if there is no error everything works just fine, BUT if the command raises an error, that error is printed to the terminal, and I don't want that to happen. I want to capture the error as well. Nothing should be printed on the screen. Any ideas?
You needn't go all the way to open3, which IIRC is only for when you need to read and write to an external command, and even then there are other methods.
For your problem I suggest using Capture::Tiny, which can capture (or even tee) the STDOUT and STDERR from anything run inside its block. For example, per your question:
#!/usr/bin/env perl
use strict;
use warnings;
use Capture::Tiny qw/capture/;
...
my ($stdout, $stderr) = capture {
    system( "snmpwalk -v $version -c $community $hostname $oid" );
};
For another example consider this functioning code:
#!/usr/bin/env perl
use strict;
use warnings;
use Capture::Tiny qw/capture/;
my ($stdout, $stderr) = capture {
    system( "echo 'hello'" );
    system( "date" );
    warn "Arg1!";
};
print "STDOUT:\n$stdout";
print "STDERR:\n$stderr";
which just gave me:
STDOUT:
hello
Mon Dec 19 23:59:06 CST 2011
STDERR:
Arg1! at ./test.pl line 11.
The only way to do this with backticks is to redirect to a file inside the shell command:
my $cmd = `snmpwalk -v $version -c $community $hostname $oid 2>error.dat`;
If you want to capture STDERR inside your script, you need IPC::Open3 instead of backticks.
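A sketch of that route for this case (variables as in the question; reading the two handles one after the other is fine as long as the error output stays small):
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;
my $pid = open3(my $in, my $out, $err,
                "snmpwalk -v $version -c $community $hostname $oid");

my @stdout = <$out>;
my @stderr = <$err>;
waitpid $pid, 0;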
In the Perl FAQ you have different options depending on how you want to proceed:
http://perldoc.perl.org/perlfaq8.html#How-can-I-capture-STDERR-from-an-external-command%3f
IO::CaptureOutput is a very convenient wrapper for what you want to do.
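For example, using its capture_exec interface (shown in more detail in a later answer) with the variables from the question:
use IO::CaptureOutput qw(capture_exec);

my @cmd = ('snmpwalk', '-v', $version, '-c', $community, $hostname, $oid);
my ($stdout, $stderr, $success, $exit_code) = capture_exec(@cmd);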

How do you capture stderr, stdout, and the exit code all at once, in Perl?

Is it possible to run an external process from Perl, capture its stderr, stdout AND the process exit code?
I seem to be able to do combinations of these, e.g. use backticks to get stdout, IPC::Open3 to capture outputs, and system() to get exit codes.
How do you capture stderr, stdout, and the exit code all at once?
(Update: I updated the API for IO::CaptureOutput to make this even easier.)
There are several ways to do this. Here's one option, using the IO::CaptureOutput module:
use IO::CaptureOutput qw/capture_exec/;
my ($stdout, $stderr, $success, $exit_code) = capture_exec( @cmd );
This uses the capture_exec() function, but IO::CaptureOutput also has a more general capture() function that can be used to capture either Perl output or output from external programs. So if some Perl module happens to use some external program, you still get the output.
It also means you only need to remember one single approach to capturing STDOUT and STDERR (or merging them) instead of using IPC::Open3 for external programs and other modules for capturing Perl output.
If you reread the documentation for IPC::Open3, you'll see a note that you should call waitpid to reap the child process. Once you do this, the status should be available in $?. The exit value is $? >> 8. See
$? in perldoc perlvar.
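In other words, once you've read from the open3 handles, a sketch of the cleanup:
waitpid $pid, 0;                   # reap the child started by open3
my $exit_code = $? >> 8;           # exit value, per perlvar
die "Command failed with exit code $exit_code\n" if $exit_code;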
If you don't want the contents of STDERR, then the capture() command from IPC::System::Simple module is almost exactly what you're after:
use IPC::System::Simple qw(capture system $EXITVAL);
my $output = capture($cmd, @args);
my $exit_value = $EXITVAL;
You can use capture() with a single argument to invoke the shell, or multiple arguments to reliably avoid the shell. There's also capturex() which never calls the shell, even with a single argument.
Unlike Perl's built-in system and backticks commands, IPC::System::Simple returns the full 32-bit exit value under Windows. It also throws a detailed exception if the command can't be started, is killed by a signal, or returns an unexpected exit value. This means that for many programs, rather than checking the exit values yourself, you can rely upon
IPC::System::Simple to do the hard work for you:
use IPC::System::Simple qw(system capture $EXIT_ANY);
system( [0,1], "frobincate", @files);                 # Must return exitval 0 or 1
my @lines = capture($EXIT_ANY, "baznicate", @files);  # Any exitval is OK.
foreach my $record (@lines) {
    system( [0, 32], "barnicate", $record);           # Must return exitval 0 or 32
}
IPC::System::Simple is pure Perl, has no dependencies, and works on both Unix and Windows systems. Unfortunately, it doesn't provide a way of capturing STDERR, so it may not be suitable for all your needs.
IPC::Run3 provides a clean and easy interface into re-plumbing all three common filehandles, but unfortunately it doesn't check to see if the command was successful, so you'll need to inspect $? manually, which is not at all fun. Providing a public interface for inspecting $? is something which is on my to-do list for IPC::System::Simple, since inspecting $? in a cross-platform fashion is not a task I'd wish on anyone.
There are other modules in the IPC:: namespace that may also provide you with assistance. YMMV.
There are three basic ways of running external commands:
system $cmd; # using system()
$output = `$cmd`; # using backticks (``)
open (PIPE, "$cmd |"); # using open()
With system(), both STDOUT and STDERR will go the same place as the script's STDOUT and STDERR, unless the system() command redirects them. Backticks and open() read only the STDOUT of your command.
You could also call something like the following with open to redirect both STDOUT and STDERR.
open(PIPE, "cmd 2>&1 |");
The return code is always stored in $? as noted by @Michael Carman.
If you're getting really complicated, you might want to try Expect.pm. But that's probably overkill if you don't need to also manage sending input to the process as well.
I found IPC::Run3 to be very helpful. You can forward all child pipes to a glob or a variable very easily, and the exit code will be stored in $?.
Below is how I grabbed STDERR, which I knew would be a number. The command wrote informatic transformations to STDOUT (which I piped to a file in the arguments using >) and reported how many transformations it made to STDERR.
use IPC::Run3;

my $number;
my $run = run3("cmd arg1 arg2 >output_file", \undef, \undef, \$number);
die "Command failed: $!" unless ($run && $? == 0);
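For the more common case of capturing both streams into variables, a sketch with a placeholder command:
use IPC::Run3;

my ($stdout, $stderr);
run3(['cmd', 'arg1', 'arg2'], \undef, \$stdout, \$stderr);
my $exit_code = $? >> 8;           # exit code, as noted above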