I have a rather odd situation. I'm trying to automate the backup of a collection of SVN repositories with Perl. I'm shelling out to the svnadmin dump command, which sends the dump to STDOUT, and any errors it encounters to STDERR.
The command I need to run will be of the form:
svnadmin dump $repo -q >$backupFile
STDOUT will go to the backup file, but STDERR is what I need to capture in my Perl script.
What's the right way to approach this kind of situation?
EDIT:
To clarify:
STDOUT will contain the SVN Dump data
STDERR will contain any errors that may happen
STDOUT needs to end up in a file, and STDERR needs to end up in Perl. At no point can ANYTHING but the original content of STDOUT end up in that stream, or the dump will be corrupted and I'll have something worse than no backup at all: a bad one!
Well, there are generic ways to do it within Perl too, but the bash solution (which the above makes me think you're looking for) is to redirect stderr first to stdout and then redirect stdout to a file. Intuitively this doesn't make a whole lot of sense until you see what's happening internally to bash. But this works:
svnadmin dump $repo -q 2>&1 >$backupFile
However, do not do it the other way around (i.e., put the 2>&1 at the end), or else all the output of both stdout and stderr will go to your file.
Edit, to clear up some confusion about whether this works:
What you want is this:
# perl -e 'print STDERR "foo\n"; print "bar\n";' 2>&1 > /tmp/f
foo
# cat /tmp/f
bar
and specifically you don't want this:
# perl -e 'print STDERR "foo\n"; print "bar\n";' > /tmp/f 2>&1
# cat /tmp/f
foo
bar
Here's one way:
{
    local $/;  # slurp mode: read all of stderr as a single chunk
    open(CMD, "svnadmin dump $repo -q 2>&1 1>$backupFile |") or die "...";
    $errinfo = <CMD>;  # read the stderr from the above command
    close(CMD);
}
In other words, use the shell 2>&1 mechanism to get stderr to a place where Perl can easily read it, and use 1> to get the dump sent to the file. The stuff I wrote about $/ and reading the stderr as a single chunk is just for convenience -- you could read the stderr you get back any way you like of course.
While tchrist is certainly correct that you can use handle redirection and backticks to make this work, I can also recommend David Golden's Capture::Tiny module. It gives generic interfaces to capturing or tee-ing STDOUT and STDERR, and from there you can do with them what you will.
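For instance, a minimal sketch using its capture_stderr function (untested; $repo and $backupFile as in the question) that leaves the shell redirect of STDOUT to the backup file untouched and captures only STDERR:

use Capture::Tiny qw(capture_stderr);

# STDOUT still goes straight to the backup file via the shell redirect;
# only STDERR ends up in $errors.
my $errors = capture_stderr {
    system("svnadmin dump $repo -q >$backupFile");
};
die "svnadmin dump failed: $errors" if $? != 0;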
This stuff is really easy. It’s what backticks were invented for, for goodness’ sake. Just do:
$his_error_output = `somecmd 2>&1 1>somefile`;
and voilà you’re done!
I don’t understand what the trouble is. Didn’t you have your gazzintas drilled into you as a young child the way Jethro did? :)
From perldoc perlop for qx:
To read both a command's STDOUT and its STDERR separately, it's
easiest to redirect them separately to files, and then read from those
files when the program is done:
system("program args 1>program.stdout 2>program.stderr");
I have a Perl CGI program, myperlcgi.pl.
Within this program I have the following:
my $retval = system('extprogram');
extprogram has a print statement within it.
The output from extprogram is being included in myperlcgi.pl's output.
I tried adding
> output.txt 2>&1
to the system call, but it did not change anything.
How do I prevent output from extprogram being included in myperlcgi.pl?
I'm surprised that stdout from the system call is being included in myperlcgi.pl.
The system command doesn't capture the STDOUT or STDERR of the executed command at all; the child simply inherits your script's filehandles.
Use backticks or open to execute the command instead. That will capture the STDOUT of the command’s execution. If the command also outputs to STDERR, then you can append 2>&1 to redirect STDERR to STDOUT for capture in backticks, like so:
my $output = `$command 2>&1`;
If you really need the native return status of the executed command, you can get that information using $? or ${^CHILD_ERROR_NATIVE}. See perldoc perlvar for details.
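For example, a minimal sketch of decoding $? after backticks (the standard idiom from perldoc -f system; $command stands in for your command):

my $output = `$command 2>&1`;
if ($? == -1) {
    warn "failed to execute: $!\n";
}
elsif ($? & 127) {
    warn "command died with signal ", ($? & 127), "\n";
}
else {
    my $exit_status = $? >> 8;   # the command's own exit code
}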
Another option is to use the IPC::Open3 Perl library, but I find that method to be overkill for most situations.
I am working on a Perl script in which I will run a command and get output like: your id is <895162>. I will store this string and read only the number from it. The problem is that my main command runs in a shell via Perl's system command,
like this:
#ids.csh is "echo your id is <1123221>"
my $p = system ("./ids.csh 2>&1 > /dev/null");
print "$p\n";
$p =~ s/[^0-9]//g;
but the output is not being copied into $p. Where am I going wrong?
system runs a command but doesn't capture it. For that, you want qx/backticks:
my $p = `./ids.csh 2>/dev/null`;
As Len Jaffe said, you probably want to throw away stderr output (rather than displaying it to your screen or wherever your stderr is going), but not stdout (that contains the message you want to capture).
Note that when qx fails, it can do so for several different reasons and constructing a meaningful error message is not trivial. If you run into problems, consider using IPC::System::Simple's capture() instead.
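A minimal sketch of that alternative, reusing the question's ids.csh (untested):

use IPC::System::Simple qw(capture);

# capture() dies with a descriptive message if ids.csh can't be run
# or exits with a non-zero status, unlike plain backticks.
my $p = capture("./ids.csh 2>/dev/null");
$p =~ s/[^0-9]//g;   # keep only the id digits
print "$p\n";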
With 2>&1 > /dev/null you have redirected stdout, which carries the message you want, to /dev/null, so it is being discarded.
I think you probably mean:
./ids.csh 2>/dev/null
Which will redirect stderr to /dev/null while leaving stdout unchanged.
I have a script whose input and output are plugged into named pipes. I try to write something to the first named pipe and to read the result from the second named pipe, but nothing happens.
I used open, then open2, then sysopen, without success:
sysopen(FH, "/home/Moses/enfr_kiid5/pipe_CGI_Uniform", O_RDWR);
sysopen(FH2, "/home/Moses/enfr_kiid5/pipe_Detoken_CGI", O_RDWR);
print FH "test 4242 test 4242" or die "error print";
This doesn't raise an error, but it doesn't work: I can't see any trace of the print, the test sentence is not written into the first named pipe, and trying to read from the second one blocks the process.
Works here.
$ mkfifo pipe
$ cat pipe &
$ perl -e 'open my $f, ">", "pipe"; print $f "test\n"'
test
$ rm pipe
You don't really need fancy sysopen stuff; named pipes are supposed to behave like regular files, albeit half-duplex. That happens to be a difference between your code and mine (O_RDWR versus a plain write open), worth investigating if you really need this opening pattern.
You may need to unbuffer your output after opening the pipe:
sysopen(...);
sysopen(...);
$old = select FH;   # make FH the default output handle
$| = 1;             # enable autoflush on it
select $old;        # restore the previous default handle
print FH ...
And, as friedo says, add a newline ("\n") to the end of your print statement!
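Putting both suggestions together, a minimal sketch using the first pipe path from the question (note that opening a FIFO for writing blocks until a reader opens the other end):

# A plain open is enough for a named pipe; sysopen/O_RDWR isn't needed.
open(my $fh, '>', '/home/Moses/enfr_kiid5/pipe_CGI_Uniform')
    or die "can't open pipe: $!";
select((select($fh), $| = 1)[0]);    # enable autoflush on $fh
print $fh "test 4242 test 4242\n";   # note the trailing newline
close($fh);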
I'm writing a Perl script that should execute commands in a shell and parse their output. I intend to use csh as the shell. I've started with this
my $out = `cmd`;
but it doesn't capture STDERR, which I need too. Running sh with output redirection does nothing
my $out = `sh -c "cmd 2>&1"`;
still captures only STDOUT, not STDERR.
Even redirecting to file in csh doesn't work for me
tcsh$ cmd >& logfile.log
still captures STDOUT only %)
The command I'm trying to execute is actually an sh script, and some commands in this script print to STDERR, and I want to capture that output. If I execute sh -c "cmd 2>/dev/null", STDERR actually goes to /dev/null and only STDOUT is printed in the terminal.
Could anyone help me with this?
I believe there is something you are not telling us. Are you on Cygwin? Or Windows? Do you have a PERL5SHELL environment variable set?
There is something that you are not telling us because both of these work fine on the five platforms I can easily test on:
% perl -le '$out = `sh -c "grep missing /dev/nowhere 2>&1" | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
But in fact, there is no reason to call sh(1) explicitly for shelling out. That’s because Perl always calls sh(1) for all its backticks, pipe opens, and system() shell-outs:
% perl -le '$out = `grep missing /dev/nowhere 2>&1 | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
The only exception to this I can think of occurs on non-Unix systems, where, because they have no /bin/sh, something else is defined.
But under no circumstances will simple shell-outs be calling tcsh(1) behind your back. You’d’ve had to’ve seriously hacked the perl(1) source to get that to happen. I also rather doubt you could (easily) hack the binary, since the string "/bin/tcsh" is going to be longer than "/bin/sh", and it isn’t very often going to be found in /bin/ anyway.
That you can’t get stderr redirection working even from the shell says something pretty weird is going on. I think we need more information.
Here, you are capturing the STDOUT of sh, which is not the STDERR of cmd:
my $out = `sh -c "cmd 2>&1"`;
Can you just run cmd directly?
my $out = `cmd 2>&1`;
Backquotes capture STDOUT, not STDERR.
system sends both stdout and stderr to wherever the parent's handles point.
If you want to capture STDERR, you need something like IPC::Open3:
Extremely similar to open2(), open3() spawns the given $cmd and connects CHLD_OUT for reading from the child, CHLD_IN for writing to the child, and CHLD_ERR for errors. If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle.
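A minimal sketch of that approach ('cmd' stands in for the real command; note the error handle must be a real glob, per the caveat just quoted):

use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym();   # must be true, or STDERR is merged into STDOUT
my $pid = open3(my $in, my $out, $err, 'cmd');
close $in;

my @stdout = <$out>;   # the command's STDOUT
my @stderr = <$err>;   # its STDERR, kept separate
waitpid($pid, 0);

Reading all of STDOUT before STDERR like this can deadlock if the command writes a lot to STDERR first; for heavy output on both streams you would multiplex with something like IO::Select.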
You said that running the command cmd >& logfile.log in tcsh sends only cmd's stdout to the log file, not its stderr. That doesn't make sense.
Try replacing cmd with the following script:
#!/bin/sh
echo stdout
echo STDERR 1>&2
Both "stdout" and "STDERR" should show up in logfile.log.
If so, then perhaps your "cmd" is doing something odd. My best guess is that cmd is writing to /dev/tty, not to either stdout or stderr; that wouldn't be affected by redirection.
To see what I mean, add this line to the above script:
echo tty > /dev/tty
I don't really have time to mock up an example as I normally would, nor even test one. I am thinking that you might try using Capture::Tiny to see if that helps.
I'm trying to stream a file from a remote website to a local command and am running into some problems when trying to detect errors.
The code looks something like this:
use IPC::Open3;

my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php'); # any website will do here
my ($wget_pid, $wget_in, $wget_out, $wget_err);

if (!($wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd))) {
    print STDERR "failed to run open3\n";
    exit(1);
}

close($wget_in);
my @wget_outs = <$wget_out>;
my @wget_errs = <$wget_err>;
print STDERR "wget stderr: " . join('', @wget_errs);
# page and errors are output on the next line, which seems wrong
print STDERR "wget stdout: " . join('', @wget_outs);
# clean up after this; not shown: running the filtering command, closing, and waitpid'ing
When I run that wget command directly from the command-line and redirect stderr to a file, something sane happens - the stdout will be the downloaded page, the stderr will contain the info about opening the given page.
wget -O - http://10.10.1.72/index.php 2> stderr_test_file
When I run wget via open3, I'm getting both the page and the info mixed together in stdout. What I expect is the loaded page in one stream and STDERR from wget in another.
I can see I've simplified the code to the point where it's not clear why I want to use open3, but the general plan is that I wanted to stream stdout to another filtering program as I received it, and then at the end I was going to read the stderr from both wget and the filtering program to determine what, if anything went wrong.
Other important things:
I was trying to avoid writing the wget'd data to a file, then filtering that file to another file, then reading the output.
It's key that I be able to see what went wrong, not just read $? >> 8 (i.e. I have to tell the user: hey, that IP address is wrong, or it isn't the right kind of website, or whatever).
Finally, I'm choosing system/open3/exec over other perl-isms (i.e. backticks) because some of the input is provided by untrustworthy users.
You are passing an undefined value as the error handle argument to open3, and as IPC::Open3 says:
If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle (this means that an autovivified lexical cannot be used for the STDERR filehandle, see SYNOPSIS) ...
The workaround is to initialize $wget_err to something before calling open3:
my ($wget_pid, $wget_in, $wget_out, $wget_err);
use Symbol qw(gensym);
$wget_err = gensym();
if (!($wget_pid = open3( ... ))) { ...
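Putting the fix into the question's code, a minimal sketch (untested; note that open3 croaks on failure rather than returning false, so wrap it in eval if you need to trap that):

use IPC::Open3;
use Symbol qw(gensym);

my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php');
my ($wget_in, $wget_out);
my $wget_err = gensym();   # a real glob, so stderr stays separate

my $wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd);
close($wget_in);

my @wget_outs = <$wget_out>;   # the downloaded page only
my @wget_errs = <$wget_err>;   # wget's status messages only
waitpid($wget_pid, 0);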