The documented example in perldoc IPC::Open2 (read from parent STDIN and write to already open handle) is a simplified version of what I'm trying to achieve: the parent writes a preamble to an output file, then a subprocess writes its output directly to the same file.
I've made a simple child script which reads input lines and prints them to both STDERR and STDOUT, with STDOUT being the 'already open handle' from the parent.
#!/usr/bin/env perl
##parent.pl
use IPC::Open2;
# read from parent STDIN and write to already open handle
open my $file, '>', 'outfile.txt' or die "open failed: $!";
my $pid = open2($file, "<&STDIN", "./child.pl");
# reap zombie and retrieve exit status
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
#!/usr/bin/env perl
##child.pl
while (<STDIN>) {
    print STDOUT "STDOUT: ", $_;
    print STDERR "STDERR: ", $_;
}
print STDERR "END OF CHILD\n";
An example run of parent.pl:
Hello
^D
STDERR: Hello
STDERR: END OF CHILD
However, I don't see the expected "STDOUT: Hello" in the output file 'outfile.txt'.
Is there some additional setup I've missed to get this example to work?
open my $file, '>', 'outfile.txt' or die "open failed: $!";
my $pid = open2($file, "<&STDIN", "./child.pl");
This will create a new pipe and overwrite the $file variable with a handle referring to the read end of the pipe, closing the old file handle in the process ;-)
In order to pass an existing file handle to open2 or open3, you want to use the >&FILEHANDLE format, but I wasn't able to figure out any way to do that when FILEHANDLE is a lexical variable such as your my $file.
But the undocumented >&NUM or >&=NUM forms (where NUM is a file descriptor number) just work:
open my $file, '>', 'outfile.txt' or die "open failed: $!";
my $pid = open2('>&'.fileno($file), '<&STDIN', './child.pl');
Example:
$ perl -MIPC::Open2 -e '
open my $f, ">foo";
open2(">&".fileno($f), "<&STDIN", "echo bar")
'; cat foo
bar
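Applying that fix to the original parent.pl gives something like this (only the open2 line changes; note that if the parent prints a preamble to $file before spawning, you should flush it first, or the child's output may land in the file before it):

#!/usr/bin/env perl
##parent.pl (fixed)
use IPC::Open2;

open my $file, '>', 'outfile.txt' or die "open failed: $!";

# Pass the descriptor number so open2 dups the existing handle
# instead of clobbering $file with a new pipe handle.
my $pid = open2('>&' . fileno($file), '<&STDIN', './child.pl');

# reap zombie and retrieve exit status
waitpid($pid, 0);
my $child_exit_status = $? >> 8;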
Related
In Perl, I can open a child process and pipe its output to the calling Perl script, like this:
open(my $cmd, '-|', 'ls') or die $!;
while (<$cmd>) {
    print $_;
}
This prints the files in my working folder, e.g.:
> foo.txt
> bar.txt
> ...
But I would like to do the same thing for a child process that remains open, e.g. to pipe tcpdump's stdout to Perl, I attempt something similar:
open(my $cmd, '-|', 'tcpdump') or die $!;
while (<$cmd>) {
    print $_;
}
... but other than the tcpdump startup text, this doesn't produce any HTTP logs. It just seems to hang. What gives?
It was a buffering issue: I needed to add the -U flag to tcpdump, which causes packets to be written as soon as they're received.
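For reference, a minimal sketch of the fixed version (assuming tcpdump is on your PATH and the script has permission to capture):

#!/usr/bin/env perl
use strict;
use warnings;

# -U flushes each packet as it is captured instead of waiting for the
# stdio block buffer to fill; -l (line-buffered) is a common alternative.
open(my $cmd, '-|', 'tcpdump', '-U') or die $!;
while (<$cmd>) {
    print $_;
}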
I am working on a Perl script that installs software changes for our application. Mostly this involves running SQL files in sqlplus. While I do log each SQL file that gets run and grep each log file for errors, I would like a single log file of everything. That way, if something odd happens during the install, I will have one file recording everything that happened while the patch was being installed. If I were writing this in bash I would do the following:
exec >my_log_file
exec 2>&1
... some code runs
exec 1>&-
exec 2>&-
The exec command will redirect anything that goes to stdout and stderr to my log file. At the end of the script I turn that off so that I can grep the log for errors. I can also redirect stdout back to /dev/tty and cat the log if I want.
Is there a pure Perl way to get the same effect? I'm guessing that if I ran the above code with Perl's system command I would be creating a new process and thus would not get all of the output from the script. I also thought about creating a bash wrapper script for the logging, but that would require the customer to run the wrapper script. It would be easier if I could do this in Perl.
The following is loosely adapted from an example in "perldoc -f open".
# Take copies of the original STDOUT and STDERR.
open(my $oldout, ">&", \*STDOUT) or die "Can't dup STDOUT: $!";
open(my $olderr, ">&", \*STDERR) or die "Can't dup STDERR: $!";
# Open STDOUT to a log file
open(STDOUT, '>', 'my_file.log') or die "Can't redirect STDOUT: $!";
# Redirect STDERR to wherever STDOUT now points
open(STDERR, ">&STDOUT") or die "Can't dup STDOUT: $!";
# Unbuffer STDOUT and STDERR
select STDERR; $| = 1;
select STDOUT; $| = 1;
# Do stuff that prints to STDOUT and STDERR
...
# Restore original STDOUT and STDERR
open(STDOUT, ">&", $oldout) or die "Can't restore STDOUT: $!";
open(STDERR, ">&", $olderr) or die "Can't restore STDERR: $!";
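To mirror the grep step from the bash version, you can then scan the log from the same script. A sketch; the ORA-/SP2- patterns are just assumed sqlplus error markers, adjust them for your own logs:

open my $log, '<', 'my_file.log' or die "Can't read log: $!";
# ORA- and SP2- prefixes are the usual sqlplus error markers (assumed here).
my @errors = grep { /ORA-\d+|SP2-\d+|ERROR/ } <$log>;
close $log;
print "Found ", scalar @errors, " suspicious lines in the log\n" if @errors;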
I need to call an external logging process from a Perl script; it takes data passed to it and writes it to a network service. That is easy enough to do. However, I have the additional requirement that any writes to STDERR from the parent process get redirected to the external process.
What I've tried to do is open a file handle to a write pipe of the external process, then redirect STDERR to that file handle. Here is my test script, which unfortunately does not work yet.
#!/usr/bin/perl
use strict;
use warnings;
# open write filehandle to external process
open my $fh, '|-', 'pipefile_http'
    or die "Couldn't open logfile: $!\n";
# redirect STDERR from parent process to same write filehandle to child process
my $fileno = fileno($fh);
open STDERR, ">&$fileno" or die "Couldn't switch STDERR to fileno $fileno: $!\n";
print $fh "1. print to file handle\n";
print STDERR "2. print to STDERR\n";
print "3. print to STDOUT\n";
close $fh;
exit 0;
When I run this script, it successfully redirects the print call to STDERR to the external logging process, but the print call to $fh does not work (the message disappears). Also, the script hangs indefinitely after it successfully prints message #3 to STDOUT. When I run the script under strace, I can see that it is blocked in a waitpid() call on the pid of the external process.
Any advice on how I can do this?
Just reassign STDERR. Your dup version most likely hangs because >&$fileno leaves two OS-level write descriptors on the pipe: close $fh makes Perl wait for the child to exit, but the child never sees end-of-file on its stdin while the dup'ed STDERR still holds the write end open. A glob assignment instead makes STDERR an alias for the very same handle, so closing $fh closes the only write end:
#!/usr/bin/perl
use strict;
use warnings;
# open write filehandle to external process
open my $fh, '|-', 'pipefile_http'
    or die "Couldn't open logfile: $!\n";
# reassign STDERR
*STDERR = $fh;
print $fh "1. print to file handle\n";
print STDERR "2. print to STDERR\n";
print "3. print to STDOUT\n";
close $fh;
exit 0;
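If you only need the redirection for part of the program, local can scope the glob assignment so the real STDERR comes back automatically at the end of the block (a sketch):

{
    local *STDERR = $fh;    # STDERR is an alias for $fh inside this block only
    print STDERR "logged via the pipe\n";
}
print STDERR "back on the real STDERR\n";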
I am piping the output of the directory command to a file handle, followed by a print to the same file handle. I wish to append some text to the dir/ls output.
open (FH, "| dir") or die "$OS_ERROR";
print FH ("sometext") or die "$OS_ERROR";
while (<FH>) {
    print;
}
When I execute the Perl script, I see the directory contents, but I do not see the text printed by the print statement; in this case, I do not see sometext. What am I missing?
To explain in more detail: I want to pipe the dir contents to FH, and following that I want to append some text to the same filehandle FH. I have referred to http://perldoc.perl.org/perlopentut.html#Pipe-Opens
You are not redirecting anything: you are piping your script's output to either the cmd.exe builtin dir or an alias to ls, depending on your OS (which means you might run into trouble if you run this script with Cygwin's ls in your path on Windows).
Writing to dir does not seem useful. If you want to filter dir's output, i.e. take the output from running dir and manipulate it before printing, you should pipe it into your script and print the processed output:
#!/usr/bin/env perl
use strict; use warnings;

my $pid = open my $dir_out, '-|', 'cmd.exe /c dir';
die "Cannot open pipe: $!\n" unless $pid;

my $output_file = 'output.txt';
open my $my_out, '>', $output_file
    or die "Cannot open '$output_file': $!";

while (my $line = <$dir_out>) {
    $line =~ s/bytes free/peons liberated/;
    print $my_out $line;
}

close $my_out
    or die "Cannot close '$output_file': $!";
close $dir_out
    or die "Cannot close pipe: $!\n";
Of course, I am assuming there are other things going on in your program and this is only a small part of it. Otherwise, you don't need to write this much code for a simple filter.
You cannot write to FH with print and then expect to read from FH in the next statement. File handles are not FIFOs (by default).
The open gives you a write-only file handle; the read end of the pipe is connected to the stdin of dir. Reading from a write handle simply gives you nothing.
What do you actually want to achieve? Send some text to the dir program, or read output of the dir program?
Since you said in the comment that you want to read the output of the dir command, you have the open call backwards; use "dir |" instead of "| dir", and read the Perl Open Tutorial.
Maybe this does what you want to do:
open (FH, "dir|") or die "$OS_ERROR";
while (<FH>) {
    print;
}
print "sometext\n";
With the open command in Perl, you can run a program through a filehandle. However, I have trouble getting back the exit code that way.
With the system command in Perl, I can get back the exit code of the program I'm running. However, I want to redirect just its STDOUT to some filehandle (not STDERR).
My stdout is going to be line-by-line output of key-value pairs that I want to insert into a map in Perl. That is why I want to redirect only the stdout of my Java program. Is that possible?
Note: if I get errors, they are printed to stderr. One possibility is to check whether anything gets printed to stderr so that I can quit the Perl script.
Canonically, if you're trying to get at the text output of a forked process, my understanding is that's what the backticks are for. If you need the exit status as well, you can check it with the $? special variable afterward, e.g.:
open my $fh, '>', "output.txt" or die $!;
print {$fh} `echo "Hello!"`;
print "Return code: $?\n";
Output to STDERR from the command in backticks will not be captured, but will instead be written directly to STDERR in the Perl program it's called from.
You may want to check out IPC::System::Simple: it gives you many options for executing external commands, capturing their output and return values, and optionally dying if a bad result is returned.
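A minimal sketch using IPC::System::Simple's capturex, which captures STDOUT only (STDERR still reaches the terminal) and dies if the command fails; the java command line and the key=value output format here are assumptions based on the question:

use strict;
use warnings;
use IPC::System::Simple qw(capturex $EXITVAL);

# Capture STDOUT line by line; capturex bypasses the shell entirely.
my @lines = capturex('java', '-jar', 'myapp.jar');
print "Exit value: $EXITVAL\n";

# Parse "key=value" lines into a hash (the "map" from the question).
my %map = map { /^([^=]+)=(.*)$/ ? ($1, $2) : () } @lines;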
This is one of the ways to do it.
open my $fh, '>', $file or die "Can't open '$file': $!";
defined(my $pid = fork) or die "fork: $!";
if (!$pid) {
    # Child: attach STDOUT to the file, leave STDERR alone, run the command
    open STDOUT, '>&', $fh or die "Can't dup STDOUT: $!";
    exec($command, @args) or die "exec failed: $!";
}
waitpid $pid, 0;
print $? == 0 ? "ok\n" : "nok\n";
Use open in -| mode. When you close the filehandle, the exit status will be in $?.
open my $fh, '-|', $command or die "Can't run '$command': $!"; # older version: open my $fh, "$command |";
my #command_output = <$fh>;
close $fh;
my $command_status = $?;
From perldoc -f close:
If the file handle came from a piped open, "close" will
additionally return false if one of the other system calls
involved fails, or if the program exits with non-zero status.
(If the only problem was that the program exited non-zero, $!
will be set to 0.) Closing a pipe also waits for the process
executing on the pipe to complete, in case you want to look at
the output of the pipe afterwards, and implicitly puts the exit
status value of that command into $? and
"${^CHILD_ERROR_NATIVE}".