I want to read the output of a command including the stderr into HANDLE:
open(HANDLE, "-|", $cmd, @args);
But the above command only reads the stdout.
How can I also read the stderr?
The IPC::Run module provides a run function that works like a supercharged system. It allows us to collect the output of STDERR and STDOUT combined:
run [$cmd, @args], '&>', \my $output;
After that, the $output variable holds the combined output as a string.
Example:
use IPC::Run qw/ run /;
run ['perl', '-E', 'say "stdout"; say STDERR "stderr"'], '&>', \my $output;
print uc $output;
Output:
STDOUT
STDERR
I don't know how to use a filehandle in place of the scalar reference so that the output can be read normally in a while(<$fh>) loop.
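If you want the while (<$fh>) style anyway, one option (a sketch, not anything IPC::Run-specific) is to capture into the scalar first and then open an in-memory filehandle on it, which core Perl has supported since 5.8:
use IPC::Run qw/ run /;

run ['perl', '-E', 'say "stdout"; say STDERR "stderr"'], '&>', \my $output;

# Open a read handle on the captured scalar and iterate line by line
open my $fh, '<', \$output or die "Cannot open in-memory handle: $!";
while (my $line = <$fh>) {
    print "got: $line";
}
close $fh;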
You're going to want to look at IPC::Open3, which starts a process and provides separate file handles for writing to the child and for reading the child's STDOUT and STDERR.
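A minimal sketch of that (some-command and its argument are placeholders). Note that draining stdout to EOF before touching stderr can deadlock if the child writes a lot to both streams, which is why the IO::Select-based example further down multiplexes the two handles:
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;    # open3 needs a pre-created glob for the STDERR handle
my $pid = open3(my $in, my $out, $err, 'some-command', 'arg1');

print "child stdout: $_" while <$out>;
print "child stderr: $_" while <$err>;

waitpid $pid, 0;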
I use Bash redirection as below in my Perl code:
open (CMDOUT, "df -h 2>&1 |");
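The same idea with a lexical filehandle and error checking (a sketch; df -h is just the example command):
# 2>&1 merges the command's STDERR into its STDOUT,
# so a single handle reads both streams
open my $cmdout, '-|', 'df -h 2>&1' or die "Cannot run df: $!";
while (my $line = <$cmdout>) {
    print $line;
}
close $cmdout;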
I wrote a routine to "safely" execute some command, and I wanted to capture STDOUT and STDERR in string variables using open(STDOUT, '+<', \$stdout) and similar for STDERR.
I verified via print "Test\n" and print STDERR "Test2\n" that the redirection works inside the routine (I can find the outputs in $stdout and $stderr).
However when I run the command through system() (Perl's version), the output still goes to the terminal.
So I wonder: does opening a filehandle on a scalar work for Perl's own I/O only?
And if so, how would I capture the STDOUT and STDERR from the system() call without using temporary files (having their own issues)?
(I've seen https://stackoverflow.com/a/109672/6607497 already)
The preferred solution (if such exists) should use as few extra packages as possible, and it should run with SLES 12 or SLES 15 (openSUSE Leap 15.2).
Those distributions only offer a limited set of Perl modules.
You can easily do this using IPC::Run to capture output.
Test script that writes to standard output and error:
#!/bin/sh
# demo.sh
echo "To Standard Output"
echo "To Standard Error" >&2
and perl script that runs it:
#!/usr/bin/env perl
use warnings;
use strict;
use IPC::Run qw/run/;
my ($out, $err);
run ["sh", "demo.sh"], \undef, \$out, \$err;
print "Standard output: ", $out;
print "Standard error: ", $err;
gives the following output:
$ perl demo.pl
Standard output: To Standard Output
Standard error: To Standard Error
Alternative using IPC::Run3 (which might be preferable if you don't need any of IPC::Run's more advanced features):
#!/usr/bin/env perl
use warnings;
use strict;
use IPC::Run3;
my ($out, $err);
run3 ["sh", "demo.sh"], \undef, \$out, \$err;
print "Standard output: ", $out;
print "Standard error: ", $err;
I'm trying to capture the output of the command executed as a different user using:
my $command = qq(sudo su - <username> -c '/usr/bin/whatever');
my $pid = open $cmdOutput, "-|", $command;
How can I capture the STDERR of /usr/bin/whatever?
I tried
$pid = open $cmdOutput, "-|", $command || die " something went wrong: $!";
but it looks like this is capturing the possible errors of "open" itself.
I also tried
my $command = qq(sudo su - <username> -c '/usr/bin/whatever' 2>/tmp/error.message);
which will redirect the STDERR to the file, which I can parse later, but I wanted some more straightforward solution.
Also, I only want to use core modules.
This is covered thoroughly in perlfaq8. Since you are using a piped open, the relevant examples are those that use open3 from the core IPC::Open3 module.
Another option is to use IPC::Run for managing your processes; its pump function will do what you need. The IPC::Open3 documentation says of IPC::Run:
This is a CPAN module that has better error handling and more facilities than Open3.
With either of these you can manipulate STDOUT and STDERR separately or together, as needed. For convenient and complete output capture also see Capture::Tiny.
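For illustration, a minimal Capture::Tiny sketch (this is its documented capture interface; the ls invocation is just an example):
use Capture::Tiny qw(capture);

# capture returns STDOUT, STDERR, and the block's return value
my ($stdout, $stderr, $exit) = capture {
    system('ls', '-l');
};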
Other than shell-style 2> redirection, there are no more elementary methods for the piped open.
If you don't mind mixing the streams or losing STDOUT altogether, another option is
my $command = 'cmd 2>&1 1>/dev/null';  # remove 1>/dev/null to have both
my $pid = open my $cmdOutput, "-|", $command;
while (<$cmdOutput>) { print } # STDERR only
The first redirection merges the STDERR stream into STDOUT, so you get both, mixed (with STDOUT subject to buffering, so lines may well come out of order). The second redirection sends STDOUT away, so with it in place you read only the command's STDERR from the handle.
The question is about running an external command using open, but I'd like to mention that the canonical and simple qx (backticks) can be used in the same way. It returns the command's STDOUT, so the same redirections as above are needed to get STDERR. For completeness:
my $cmd = 'cmd_to_execute';
my $allout = qx($cmd 2>&1);             # Both STDOUT and STDERR in $allout, or
my $stderr = qx($cmd 2>&1 1>/dev/null); # Only STDERR
my $exit_status = $?;
The qx puts the child process exit code (status) in $?. This can then be inspected for failure modes; see a summary in the qx page or a very thorough discussion in I/O operators in perlop.
Note that the STDERR returned this way is from the command, if it ran. If the command itself couldn't be run (for a typo in command name, or fork failed for some reason) then $? will be -1 and the error will be in $!.
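A sketch of inspecting those failure modes, following the pattern given in system's documentation (some_command is a placeholder):
my $allout = qx(some_command 2>&1);

if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d\n", $? & 127;
}
else {
    printf "child exited with value %d\n", $? >> 8;
}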
As suggested by zdim, I used the IPC::Open3 module, and ended up with something like this doing the job for me:
use IPC::Open3;
use IO::Select;
use Symbol qw(gensym);

my $instanceCommand = qq(sudo su - <username> -c '<command>');

my ($infh, $outfh, $errfh, $pid);
$errfh = gensym();    # open3 needs a pre-created glob for the STDERR handle
$pid = open3($infh, $outfh, $errfh, $instanceCommand);

my $sel = IO::Select->new;
$sel->add($outfh, $errfh);

while (my @ready = $sel->can_read) {
    foreach my $fh (@ready) {
        my $line = <$fh>;
        if (not defined $line) {
            $sel->remove($fh);
            next;
        }
        if ($fh == $outfh) {
            chomp $line;
            # <----- command output processing ----->
        }
        elsif ($fh == $errfh) {
            chomp $line;
            # <----- command error processing ----->
        }
        else {
            die "Reading from something else\n";
        }
    }
}
waitpid($pid, 0);
Maybe not completely bulletproof, but it's working fine for me, even while executing convoluted cascaded scripts as <command>.
The desired destination, opened for writing, could be dup()'ed onto file descriptor 2.
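For example, a minimal sketch of that dup (err.log is just an assumed destination):
# Open the destination, then duplicate its handle onto STDERR (fd 2)
open my $log, '>', 'err.log' or die "Cannot open err.log: $!";
open STDERR, '>&', $log      or die "Cannot dup onto STDERR: $!";

system('some-command');    # the child's STDERR now lands in err.log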
I am trying to redirect the STDOUT and STDERR into a log file, but I also want to print those streams to the console. I am using Perl, and my code looks like this:
use Capture::Tiny ':all';
my ($stdout, $stderr);
($stdout, $stderr) = capture {
system($command);
};
print $stdout;
print $stderr;
It works, but if the command waits for user input, the program doesn't print $stdout until a key is pressed. Is there any way to print $stdout before the command needs user input? A line-by-line approach would be fine.
Thank you in advance!
Well, I'm not familiar with Capture::Tiny, so this may not be entirely relevant. Generally though, if I'm looking to handle STDIN, STDOUT and/or STDERR, then I look towards either open (if it's just one), or IPC::Open2 and IPC::Open3, which open multiple file descriptors attached to a process.
use IPC::Open3;
$pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR,
'some cmd and args', 'optarg', ...);
use IPC::Open2;
$pid = open2(\*CHLD_OUT, \*CHLD_IN, 'some', 'cmd', 'and', 'args');
Although, rather than the package globs in those examples, I would suggest using lexical filehandles:
my($chld_out, $chld_in);
$pid = open2($chld_out, $chld_in, 'some cmd and args');
You can then read and write from your filehandles (bear in mind, though, that by default a read will block).
You do need to close the handles and then (ideally) waitpid to reap the process when you're done, though.
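A short sketch of that lifecycle, using cat purely as a demo child that echoes its input:
use IPC::Open2;

my $pid = open2(my $chld_out, my $chld_in, 'cat');

print $chld_in "hello\n";
close $chld_in;                  # send EOF so the child can finish

print "child said: $_" while <$chld_out>;
close $chld_out;

waitpid $pid, 0;                 # reap the child; exit status is in $?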
You need to use Capture::Tiny's tee instead of capture.
The tee function works just like capture, except that output is captured as well as passed on to the original STDOUT and STDERR.
Just replace the function call and your output will end up both in the variables and on the screen.
use Capture::Tiny ':all';
my ($stdout, $stderr);
($stdout, $stderr) = tee {
system($command);
};
A simple approach that I could think of:
#! /usr/bin/perl -w
# Using perl one liner as a command here
# which prints a string to STDOUT and STDERR
my $cmd = "perl -e 'print STDOUT \"stdout\n\"; print STDERR \"stderr\n\";'";
my $log = "./test.log";
# using "2>&1" we are redirecting stderr also to stdout
system("$cmd 2>&1 | tee $log");
# Sample run results in both the strings getting printed to console as well as to log file
> perl test.pl
stderr
stdout
> cat test.log
stderr
stdout
So I have:
test.pl > test.log
Is there a way to know inside test.pl that I am outputting to 'test.log'? At the end of my script I want to do some manipulation of test.log without hardcoding the name.
Maybe. The following works on Linux, but will not be very portable to other systems...
#!/usr/bin/env perl
use strict;
use warnings;
my $out = readlink("/proc/$$/fd/1");
print STDERR "I am being output to $out\n";
Naturally, this is probably a bad idea. Better to explicitly open the file and write to it in Perl, rather than having the shell set up redirections.
You can redirect standard output from within Perl, with minimal changes to your script. Pass the log file name as an argument instead of using a shell redirection:
test.pl test.log
my ($file) = @ARGV;
if (@ARGV) {
open STDOUT, ">", $file or die $!;
}
print "output is redirected to $file\n";
# use $file at the end
I am trying to redirect the output of a Perl system command to a file, along with a timestamp, using the following code, but it's not working:
$cmd="echo hi";
($second, $minute, $hour) = localtime();
$time="$hour:$minute:$second";
system("$time>new.txt");
system("$cmd 1>>new.txt 2>>&1");
If you want to write the variable $time to a text file, open a writeable filehandle and print it to your file instead.
open(my $outfile, '>', 'new.txt') or die "Cannot open new.txt: $!";
print $outfile $time;
...
Secondly, your output redirection should read:
1>>new.txt 2>&1
Which means "append STDOUT (1) to new.txt, redirect STDERR (2) to STDOUT (1)". Having >> makes no sense for the second part.
Finally, I (and every other Perl programmer) would strongly recommend using the strict and warnings pragmas in your scripts. They will help you pick up on errors and potential problems. Once you've done this, all variables must be declared with my, which is a good habit to get into anyway. After all that, your script should look something like this:
# recommended pragmas:
use strict;
use warnings;
# declare all new variables with "my"
my $cmd="echo hi";
my ($second, $minute, $hour) = localtime();
my $time="$hour:$minute:$second";
# open a writeable filehandle and print to the filehandle
open(my $outfile, '>', 'new.txt') or die "Cannot open new.txt: $!";
print $outfile $time,"\n"; # I've added a newline character here so that
# the time and the command's output are on different lines ;)
system("$cmd 1>>new.txt 2>&1");