I want to fork a child (my C executable) and share a pipe between the Perl and C processes. Is it possible to use the child's STDIN and STDOUT as the two ends of that pipe?
I tried the following code, but the child process just keeps running.
use IPC::Open2;
use Symbol;

my $CHILDPROCESS = "chile.exe";
$WRITER = gensym();
$READER = gensym();
my $pid = open2($READER, $WRITER, $CHILDPROCESS);
while (<STDIN>) {
    print $WRITER $_;
}
close($WRITER);
while (<$READER>) {
    print STDOUT "$_";
}
The Safe Pipe Opens section of the perlipc documentation describes a nice feature for doing this:
The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you've opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
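For illustration, here is a minimal sketch of that forking open (my own example, not from the docs), assuming a Unix-like system:

my $pid = open(my $to_child, '|-');
die "fork failed: $!" unless defined $pid;
if ($pid) {
    # parent: whatever we print here shows up on the child's STDIN
    print $to_child "hello from the parent\n";
    close $to_child;    # sends EOF and waits for the child
} else {
    # child: running the same program, with STDIN attached to the pipe
    while (<STDIN>) {
        print "child got: $_";
    }
    exit 0;
}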
But according to the perlport documentation
open
open to |- and -| are unsupported. (Win32, RISC OS)
EDIT: This might only work on Linux; I have not tried it on Windows. There might be a way to emulate it on Windows, though.
Here is what you want I think:
use IO::Handle;    # for autoflush() on the pipe handles

# Set up pipes between Perl and the C child.
pipe(FROM_PERL, TO_C) or die "pipe: $!\n";
pipe(FROM_C, TO_PERL) or die "pipe: $!\n";

# Autoflush so we don't have (some) problems with deadlocks.
TO_C->autoflush(1);
TO_PERL->autoflush(1);

if ($pid = fork()) {
    # parent: close the child's ends of the pipes
    close(FROM_PERL) or die "close: $!\n";
    close(TO_PERL) or die "close: $!\n";
}
else {
    # child
    die "Error on fork.\n" unless defined($pid);
    # redirect I/O: the C program's STDIN/STDOUT/STDERR become the pipes
    open STDIN, "<&FROM_PERL" or die "dup STDIN: $!\n";
    open STDOUT, ">&TO_PERL" or die "dup STDOUT: $!\n";
    open STDERR, ">&TO_PERL" or die "dup STDERR: $!\n";
    close(TO_C) or die "close: $!\n";
    close(FROM_C) or die "close: $!\n";
    exec("./cprogram") or die "exec: $!\n";    # start the C program
}
Now you can communicate with the C program via TO_C (its input) and FROM_C (its output).
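For example, the parent side might then look like this (a sketch; writing everything before reading avoids one class of deadlock, though for large volumes of data you would want select or a similar mechanism):

print TO_C "input for the C program\n";
close(TO_C) or die "close: $!\n";    # EOF lets the child finish reading
while (<FROM_C>) {
    print "child wrote: $_";
}
waitpid($pid, 0);                    # reap the child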
This Q&A over on Perlmonks suggests that open2 runs fine on Windows, provided you manage it carefully enough.
The documented example in perldoc IPC::Open2 (read from the parent's STDIN and write to an already open handle) is a simplified version of what I'm trying to achieve: the parent writes a preamble to an output file, then a subprocess writes its output directly to the same file.
I've made a simple child script which reads input lines and prints to both STDERR and STDOUT, where STDOUT is the 'already open handle' from the parent.
#!/usr/bin/env perl
##parent.pl
use IPC::Open2;
# read from parent STDIN and write to already open handle
open my $file, '>', 'outfile.txt' or die "open failed: $!";
my $pid = open2($file, "<&STDIN", "./child.pl");
# reap zombie and retrieve exit status
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
#!/usr/bin/env perl
##child.pl
while (<STDIN>) {
    print STDOUT "STDOUT: ", $_;
    print STDERR "STDERR: ", $_;
}
print STDERR "END OF CHILD\n";
An example run of parent.pl:
Hello
^D
STDERR: Hello
STDERR: END OF CHILD
However, I don't see the expected "STDOUT: Hello" in the output file 'outfile.txt'
Is there some additional setup I've missed to get this example to work?
open my $file, '>', 'outfile.txt' or die "open failed: $!";
my $pid = open2($file, "<&STDIN", "./child.pl");
This will create a new pipe and overwrite the $file variable with a handle referring to the read end of the pipe, closing the old file handle in the process ;-)
In order to pass an existing file handle to open2 or open3, you want to use the >&FILEHANDLE format, but I wasn't able to figure out any way to do that when FILEHANDLE is a lexical variable like your my $file.
But the undocumented >&NUM or >&=NUM forms (where NUM is a file descriptor number) just work:
open my $file, '>', 'outfile.txt' or die "open failed: $!";
my $pid = open2('>&'.fileno($file), '<&STDIN', './child.pl');
Example:
$ perl -MIPC::Open2 -e '
open my $f, ">foo";
open2(">&".fileno($f), "<&STDIN", "echo bar")
'; cat foo
bar
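Applied to the question's parent.pl, the fix looks like this:

#!/usr/bin/env perl
##parent.pl
use IPC::Open2;
open my $file, '>', 'outfile.txt' or die "open failed: $!";
# pass the descriptor number instead of the lexical handle itself
my $pid = open2('>&' . fileno($file), '<&STDIN', './child.pl');
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;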
In Perl, I can open a child process and pipe its output to the calling Perl script, like this:
open(my $cmd, '-|', 'ls') or die $!;
while (<$cmd>) {
print $_;
}
This prints the files in my working folder, e.g.:
> foo.txt
> bar.txt
> ...
But I would like to do the same thing for a long-running child process. For example, to pipe tcpdump's stdout to Perl, I attempt something similar:
open(my $cmd, '-|', 'tcpdump') or die $!;
while (<$cmd>) {
print $_;
}
... but other than the tcpdump startup text, this doesn't produce any of the HTTP logs I expect. It just seems to hang. What gives?
It was a buffering issue. I needed to add the -U flag to tcpdump, which causes packets to be written as soon as they're received.
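In code, that's just the earlier pipe open with the extra flag (the list form of open also bypasses the shell); note that tcpdump usually needs sufficient privileges to capture:

open(my $cmd, '-|', 'tcpdump', '-U') or die $!;
while (<$cmd>) {
    print $_;
}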
A common functions script that our systems use relies on a simple STDERR redirect in order to create user-specific error logs. It goes like this:
# re-route standard error to a text file
close STDERR;
open STDERR, '>>', 'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
    or die "couldn't redirect STDERR: $!";
Now, I copy-pasted this into my own functions script for a system-specific error log, and while it compiles, it breaks the scripts that require it. Oddly enough, it doesn't even print the errors that the child scripts are throwing. My slightly modified version looks like:
close STDERR;
open (STDERR, '>>', 'err/STDERR_SPORK.txt')
    or die "couldn't redirect STDERR: $!";
Everything compiles fine at the command prompt, -c returns ok, and if I throw a warn into the functions script and compile, it outputs properly. I still don't understand why this redirect kills the child scripts. I cut out the redirect, and sure enough they work. Any thoughts?
die (and warn) writes to STDERR. If you close STDERR and then need to die as you attempt to reopen it, where would you expect to see the error message?
Since this is Perl, there are many ways to address this issue. Here are a couple.
Open the file first on a temporary filehandle, and reassign it to STDERR only if that succeeds:
if (open my $tmp_fh, '>>',
        'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt') {
    close STDERR;
    *STDERR = *$tmp_fh;
}
else {
    die "couldn't redirect STDERR: $!";
}
Use con. For programs that you run from a command line, most systems have a concept of "the current terminal". On Unix systems it's /dev/tty, and on Windows it's con. Open an output stream to this terminal pseudo-file:
open STDERR, '>>',
    'd:/output/Logs/STDERR_' . &parseUsername($ENV{REMOTE_USER}) . '.txt'
    or do {
        open my $tty_fh, '>', 'con';
        print $tty_fh "couldn't redirect STDERR: $!";
        exit 1;
    };
After changing nothing in the script and just poking around on the server, it now works as expected. I don't know what to say, to be honest.
With the open command in Perl, you can use a filehandle. However, I have trouble getting back the exit code of the program I run with open.
With the system command in Perl, I can get back the exit code of the program I'm running. However, I want to redirect only the STDOUT to some filehandle (not STDERR).
My STDOUT is going to be a line-by-line output of key-value pairs that I want to insert into a map (hash) in Perl. That is why I want to redirect only the STDOUT of my Java program. Is that possible?
Note: if I get errors, they are printed to STDERR. One possibility is to check whether anything gets printed to STDERR so that I can quit the Perl script.
Canonically, if you're trying to get at the text output of a forked process, my understanding is that's what the backticks are for. If you need the exit status as well, you can check it with the $? special variable afterward, e.g.:
open my $fh, '>', "output.txt" or die $!;
print {$fh} `echo "Hello!"`;
print "Return code: $?\n";
Output to STDERR from the command in backticks will not be captured, but will instead be written directly to STDERR in the Perl program it's called from.
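If you do want STDERR in the captured text too, a common idiom is to merge it at the shell level; this sketch assumes a Bourne-style shell and a placeholder command name:

# merge STDERR into the captured output
my $both = `some_command 2>&1`;
# or capture only STDERR (stdout discarded), e.g. to test whether
# the command printed any errors at all
my $errs = `some_command 2>&1 1>/dev/null`;
die "command wrote to STDERR" if length $errs;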
You may want to check out IPC::System::Simple; it gives you many options for executing external commands, capturing their output and return values, and optionally dying if a bad result is returned.
This is one of the ways to do it.
open my $fh, '>', $file or die "open: $!";
defined(my $pid = fork) or die "fork: $!";
if (!$pid) {
    # child: point STDOUT at the file, then run the command
    open STDOUT, '>&', $fh or die "dup: $!";
    exec($command, @args) or die "exec: $!";
}
waitpid $pid, 0;
print $? == 0 ? "ok\n" : "nok\n";
Use open in -| mode. When you close the filehandle, the exit status will be in $?.
open my $fh, '-|', $command;   # older versions: open my $fh, "$command |";
my @command_output = <$fh>;
close $fh;
my $command_status = $?;
From perldoc -f close
If the file handle came from a piped open, "close" will
additionally return false if one of the other system calls
involved fails, or if the program exits with non-zero status.
(If the only problem was that the program exited non-zero, $!
will be set to 0.) Closing a pipe also waits for the process
executing on the pipe to complete, in case you want to look at
the output of the pipe afterwards, and implicitly puts the exit
status value of that command into $? and
"${^CHILD_ERROR_NATIVE}".
I have a Perl script which forks and daemonizes itself. It's run by cron, so in order not to leave a zombie around, I shut down STDIN, STDOUT, and STDERR:
open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
if (!fork()) {
    do_some_fork_stuff();
}
The question I have is: I'd like to restore at least STDOUT after this point (it would be nice to restore the other two). But what magic symbols do I need to use to re-open STDOUT as what STDOUT used to be?
I know that I could use "/dev/tty" if I was running from a tty (but I'm running from cron and depending on stdout elsewhere). I've also read tricks where you can put STDOUT aside with open SAVEOUT, ">&STDOUT", but just the act of making this copy doesn't solve the original problem of leaving a zombie around.
I'm looking to see if there's some magic like open STDOUT,"|-" (which I know isn't it) to open STDOUT the way it's supposed to be opened.
# save a copy of the STDERR file descriptor
open(CPERR, ">&STDERR");
# redirect STDERR into a log file
open(STDERR, ">>xyz.log") || die "Error stderr: $!";
# ... later, close the redirected filehandle
close(STDERR) || die "Can't close STDERR: $!";
# and restore STDERR from the saved copy
open(STDERR, ">&CPERR") || die "Can't restore stderr: $!";
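The same pattern applied to STDOUT, which is what the question actually asks about; a sketch using the three-argument dup form of open:

# duplicate STDOUT before repointing it
open(my $saved_stdout, '>&', \*STDOUT) or die "dup STDOUT: $!";
open(STDOUT, '>>', '/dev/null') or die "redirect STDOUT: $!";
# ... daemonized work with STDOUT pointed at /dev/null ...
open(STDOUT, '>&', $saved_stdout) or die "restore STDOUT: $!";
close($saved_stdout);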
If it's still useful, two things come to mind:
You can close STDOUT/STDERR/STDIN in just the child process (i.e. inside if (!fork())). This will allow the parent to keep using them, because they are still open there.
I think you can use the simpler close(STDOUT) instead of opening it to /dev/null.
For example:
if (!fork()) {
    close(STDIN)  or die "Can't close STDIN: $!\n";
    close(STDOUT) or die "Can't close STDOUT: $!\n";
    close(STDERR) or die "Can't close STDERR: $!\n";
    do_some_fork_stuff();
}
Once closed, there's no way to get it back.
Why do you need STDOUT again? To write messages to the console? Use /dev/console for that, or write to syslog with Sys::Syslog.
Honestly though, the other answer is correct. You must save the old stdout (cloned to a new fd) if you want to reopen it later. It does solve the "zombie" problem, since you can then redirect fd 0 (and 1 & 2) to /dev/null.
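For the Sys::Syslog route suggested above, a minimal sketch (the 'mydaemon' identity and 'daemon' facility are placeholders):

use Sys::Syslog qw(openlog syslog closelog);
openlog('mydaemon', 'pid', 'daemon');
syslog('info', 'daemon started; stdout and stderr are closed');
closelog();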