This works fine:
#!/usr/bin/perl -w
#
#pipe2 - use pipe and fork so child can send to parent
use IO::Handle;
pipe(READER, WRITER);
WRITER->autoflush(1);
if ($pid = fork) {
    close WRITER;
    chomp($line = <READER>);
    print "Parent Pid $$ just read this: `$line'\n";
    close READER;
    waitpid($pid, 0);
} else {
    die "cannot fork: $!" unless defined $pid;
    close READER;
    print WRITER "Child Pid $$ is sending this\n";
    close WRITER;  # this will happen anyway
    exit;
}
But when I try to make the reader non-blocking with fcntl, like this:
use IO::Handle;
use Fcntl;
pipe(READER, WRITER);
WRITER->autoflush(1);
if ($pid = fork) {
    close WRITER;
    fcntl(fileno(READER), F_GETFL, $flags)
        or die "Couldn't get flags for READER : $!\n";
    $flags |= O_NONBLOCK;
    fcntl(fileno(READER), F_SETFL, $flags)
        or die "Couldn't set flags for READER $!\n";
    chomp($line = <READER>);
    print "Parent Pid $$ just read this: `$line'\n";
    close READER;
    waitpid($pid, 0);
} else {
    die "cannot fork: $!" unless defined $pid;
    close READER;
    print WRITER "Child Pid $$ is sending this\n";
    close WRITER;  # this will happen anyway
    exit;
}
I get:
fcntl() on unopened filehandle 3 at pip2.pl line 14.
Couldn't get flags for READER : Bad file descriptor
I need to "watch" the child and do something if it doesn't respond properly within a specific time. I need async communications with the child.
fcntl(fileno(READER),F_GETFL,$flags)
fcntl takes a filehandle, not a file descriptor number. Use fcntl(READER, ...), not fcntl(fileno(READER), ...).
Apart from that, it is recommended not to use global barewords for filehandles. Better to use lexical variables, i.e.
pipe(my $reader, my $writer);
$writer->autoflush();
...
Apart from avoiding potential conflicts with other global symbols and the risk of uncaught typos, this also closes the respective filehandle once the variable goes out of scope.
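Putting both fixes together, a minimal sketch of the non-blocking parent (for real asynchronous work you would typically reach for select() or an event loop; this just shows the fcntl mechanics):

use IO::Handle;
use Fcntl;

pipe(my $reader, my $writer) or die "pipe failed: $!";
$writer->autoflush(1);

if (my $pid = fork) {
    close $writer;
    # Pass the filehandle itself; for F_GETFL the flags come back
    # as the return value.
    my $flags = fcntl($reader, F_GETFL, 0)
        or die "Couldn't get flags for reader: $!\n";
    fcntl($reader, F_SETFL, $flags | O_NONBLOCK)
        or die "Couldn't set flags for reader: $!\n";
    # With O_NONBLOCK set, the read returns immediately instead of
    # blocking when no data has arrived yet (a real program would
    # also check for EOF / child exit).
    my $line;
    until (defined($line = <$reader>)) {
        sleep 1;    # do the "watching" work here instead of sleeping
    }
    chomp $line;
    print "Parent Pid $$ just read this: `$line'\n";
    close $reader;
    waitpid($pid, 0);
} else {
    die "cannot fork: $!" unless defined $pid;
    close $reader;
    print $writer "Child Pid $$ is sending this\n";
    exit;
}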
All I'm trying to do here is:
Create a pipe
Fork a sub-process
Parent gets a message from the user, sends it to the child
Child gets the message, prints it to the screen
Repeat until user doesn't enter a message
This is what I have so far; I still need to implement a loop to repeat until the user doesn't enter a message.
#!perl -w
use strict;
use IO::Handle;   # needed for the autoflush method on a filehandle

pipe(PIPE_READ, PIPE_WRITE);
autoflush PIPE_WRITE 1;

my $pid = fork();
if ($pid) {
    &write_pipe($pid);
    waitpid($pid, 0);
}
elsif (defined $pid) {
    &read_pipe;
}
else {
    die "cannot fork: $!";
}

sub write_pipe {
    print "pid $$ \n";
    print "Enter message: ";
    sleep 1;
    my $usr_msg = <>;
    print "Parent pid = $$ message = $usr_msg";
    print PIPE_WRITE "$usr_msg\n";
    close(PIPE_WRITE);
    close(PIPE_READ);
}

sub read_pipe {
    print "child pid = $$";
    my $msg_read = <PIPE_READ>;
    close(PIPE_WRITE);
    print "received from pipe $msg_read";
}
First of all, you are unintentionally creating two children. Replace
if ($pid = fork)
with
if ($pid)
In the child, call
close(PIPE_WRITE);
In the parent, call
close(PIPE_READ);
In the parent (when done writing), call
close(PIPE_WRITE);
As for reading from a filehandle until EOF or until a specific command is entered:
while (my $line = <>) {
    last if $line =~ /^(?:quit|exit)$/;
    ...
}
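Putting all of the above together, a minimal sketch of one way to add the missing loop, using lexical handles and treating an empty line as "no message":

#!/usr/bin/perl -w
use strict;
use IO::Handle;

pipe(my $reader, my $writer) or die "cannot pipe: $!";
$writer->autoflush(1);

my $pid = fork();
die "cannot fork: $!" unless defined $pid;

if ($pid) {
    # parent: read messages from the user, send them down the pipe
    close $reader;
    print "Enter message: ";
    while (my $msg = <STDIN>) {
        chomp $msg;
        last if $msg eq '';    # empty line ends the conversation
        print $writer "$msg\n";
        print "Enter message: ";
    }
    close $writer;             # child sees EOF and leaves its loop
    waitpid($pid, 0);
} else {
    # child: print every message that arrives on the pipe
    close $writer;
    while (my $msg = <$reader>) {
        print "child $$ received: $msg";
    }
    close $reader;
    exit;
}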
I need to work with some libraries that unfortunately log diagnostic messages to STDOUT and STDERR. By using tie, I can redirect those writes to a function that captures them. Since I don't want all STDOUT and STDERR output of my programs to be captured through the tied handle, I'd like to do this only for certain packages.
I have come up with a solution where the actual behavior is determined by looking at caller(), as can be seen below, but I have the feeling that there has to be a better way... Is there a more elegant solution?
package My::Log::Capture;
use strict;
use warnings;
use 5.010;
sub TIEHANDLE {
    my ($class, $channel, $fh, $packages) = @_;
    bless {
        channel  => lc $channel,
        fh       => $fh,
        packages => $packages,
    }, $class;
}

sub PRINT {
    my $self = shift;
    my $caller = (caller)[0];
    if ($caller ~~ $self->{packages}) {
        local *STDOUT = *STDOUT;
        local *STDERR = *STDERR;
        given ($self->{channel}) {
            when ('stdout') {
                *STDOUT = $self->{fh};
            }
            when ('stderr') {
                *STDERR = $self->{fh};
            }
        }
        # Capturing/Logging code goes here...
    } else {
        $self->{fh}->print(@_);
    }
}
1;
package main;
use My::Foo;
# [...]
use My::Log::Capture;
open my $stderr, '>&', *STDERR;
tie *STDERR, 'My::Log::Capture', (stderr => $stderr, [qw< My::Foo >]);
# My::Foo's STDERR output will be captured, everyone else's STDERR
# output will just be relayed.
Aside from fixing the libraries, I can think of only one solution that might be better.
You can re-open STDOUT and STDERR file handles into your own file handles. Then, re-open STDOUT and STDERR with your tied handles.
For example, here's how you do it for STDOUT:
open my $fh, ">&", \*STDOUT or die "cannot reopen STDOUT: $!";
close STDOUT;
open STDOUT, ">", "/tmp/test.txt";
say $fh "foo"; # goes to real STDOUT
say "bar"; # goes to /tmp/test.txt
You can read perldoc -f open for all the gory details on what ">&" and such does.
Anyway, instead of "/tmp/test.txt" you can replace that open call with the setup for your tied file handle.
Your code will then have to either write to an explicit filehandle or use select to switch the default filehandle:
select $fh;
say "foo"; # goes to real STDOUT
select STDOUT;
say "bar"; # goes to /tmp/test.txt
I want to redirect STDERR and STDOUT to a variable. I did this:
close(STDOUT);
close(STDERR);
my $out;
open(STDOUT, ">>", \$out);
open(STDERR, ">>", \$out);
for (1..10)
{
    print "print\n";  # this is ok.
    warn "warn\n";    # same
    system("make");   # this is lost: neither on screen nor in the variable.
}
The problem is with system: I want the output of that call to be captured too.
use Capture::Tiny!
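For example, a minimal sketch of this question's loop with Capture::Tiny, which captures at the file-descriptor level, so output from child processes started by system() is included:

use strict;
use warnings;
use Capture::Tiny qw(capture_merged);

# capture_merged redirects the real STDOUT/STDERR file descriptors,
# so even external commands started by system() are captured.
my $out = capture_merged {
    for (1..10) {
        print "print\n";
        warn "warn\n";
        system("make");
    }
};
print "captured:\n$out";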
Are you seeking to capture the output in a variable? If so, you have to use backticks or qx{} with appropriate redirection. For example, you could use:
#!/usr/bin/env perl
use strict;
use warnings;

# Ensure we have a way to write messages
open my $fh, '>', "output" or die;
close(STDOUT);
close(STDERR);
my $out;
open(STDOUT, ">>", \$out) or do { print $fh "failed to open STDOUT ($!)\n"; die };
open(STDERR, ">>", \$out) or do { print $fh "failed to open STDERR ($!)\n"; die };

foreach my $i (1..10)
{
    print "print $i\n";
    warn "warn $i\n";
    my $extra = qx{make pth$i 2>&1};
    print $fh "<<$i>><<$out>><<$extra>>\n";
}
(I happen to have programs pth1, pth2 and pth3 in the directory - they were made OK; pth4 and above write errors to stderr; the redirection was necessary.)
You should always check the success of operations such as open().
Why is this necessary? Because writing to a variable requires the cooperation of the process doing the writing - and make doesn't know how to cooperate.
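If you want child-process output without a per-command qx{}, another sketch (using File::Temp, which is my assumption, not part of the original answer) is to redirect to a real file, which children do inherit, and read it back afterwards:

use strict;
use warnings;
use File::Temp qw(tempfile);

my (undef, $tmp_name) = tempfile(UNLINK => 1);

# A real file has a file descriptor that system() children inherit,
# unlike an in-memory scalar handle.
open STDOUT, '>>', $tmp_name or die "cannot redirect STDOUT: $!";
open STDERR, '>>', $tmp_name or die "cannot redirect STDERR: $!";
$| = 1;    # flush prints so they are on disk before we read back

print "print\n";
warn "warn\n";
system("make");

# Slurp everything back into a variable.
open my $read, '<', $tmp_name or die "cannot read $tmp_name: $!";
my $out = do { local $/; <$read> };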
There are several ways to redirect and restore STDOUT. Some of them work with STDERR too. Here are my two favorites:
Using select:
my $out;
open my $fh, ">>", \$out;
select $fh;
print "written to the variable\n";
select STDOUT;
print "written to original STDOUT\n";
Using local:
my $out;
do {
    local *STDOUT;
    open STDOUT, ">>", \$out;
    print "written to the variable\n";
};
print "written to original STDOUT\n";
Enjoy.
The reason this is happening is that the reopened STDOUT and STDERR filehandles are no longer the stdout and stderr file descriptors the shell handed to the perl binary: a handle opened to an in-memory scalar has no file descriptor that a child process could write to. In order to achieve what you want, you should use a piped open instead of system.
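A minimal sketch of the piped-open approach (merging the child's stderr via the shell, purely as an illustration):

# Read make's output through a pipe and print it ourselves, so it
# lands in $out like any other print on the reopened STDOUT.
open my $pipe, '-|', 'make 2>&1' or die "cannot run make: $!";
while (my $line = <$pipe>) {
    print $line;
}
close $pipe;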
Why not use IPC::Open3?
TLDR Answer
use Capture::Tiny ':all';
Merged STDOUT and STDERR
If you want STDOUT (from print()s) and STDERR (from warn()s) to be merged, then use...
my ($merged, @result) = capture_merged { print "Hello, world!" }; # static code
my ($merged, @result) = capture_merged { eval $codetoeval };      # code in variable
Separated STDOUT and STDERR
If you want them separated...
my ($stdout, $stderr, @result) = capture { print "Hello, world!" }; # static code
my ($stdout, $stderr, @result) = capture { eval $codetoeval };      # code in variable
Results of Eval
@result indicates the success, with success being [1] and failure being []. Capture::Tiny has a ton of other functions that you can look through for other cases, like code references, etc., but I think the code above should cover most of any Perl developer's needs.
Say I have a resource (e.g. a filehandle or network socket) which has to be freed:
open my $fh, "<", "filename" or die "Couldn't open filename: $!";
process($fh);
close $fh or die "Couldn't close filename: $!";
Suppose that process might die. Then the code block exits early, and $fh doesn't get closed.
I could explicitly check for errors:
open my $fh, "<", "filename" or die "Couldn't open filename: $!";
eval { process($fh) };
my $saved_error = $@;
close $fh or die "Couldn't close filename: $!";
die $saved_error if $saved_error;
but this kind of code is notoriously difficult to get right, and only gets more complicated when you add more resources.
In C++ I would use RAII to create an object which owns the resource, and whose destructor would free it. That way, I don't have to remember to free the resource, and resource cleanup happens correctly as soon as the RAII object goes out of scope - even if an exception is thrown. Unfortunately in Perl a DESTROY method is unsuitable for this purpose as there are no guarantees for when it will be called.
Is there a Perlish way to ensure resources are automatically freed like this even in the presence of exceptions? Or is explicit error checking the only option?
I think that's what Scope::Guard was designed to help with.
#!/usr/bin/perl
use strict; use warnings;
use Scope::Guard;

my $filename = 'file.test';
open my $fh, '>', $filename
    or die "Couldn't open '$filename': $!";

{
    my $sg = Scope::Guard->new(
        sub {
            close $fh or die "Could not close";
            warn "file closed properly\n";
        }
    );
    process($fh);
}

sub process { die "cannot process\n" }
However, as @Philip notes in the comments, Scope::Guard utilizes the DESTROY method, which creates some uncertainty as to when the scope exit code will be run. Modules such as Hook::Scope and Sub::ScopeFinalizer look fine as well, although I have never used them.
I do like Try::Tiny for its clean interface and sheer simplicity and it will help you handle exceptions the correct way:
#!/usr/bin/perl
use strict; use warnings;
use Try::Tiny;

my $filename = 'file.test';
open my $fh, '>', $filename
    or die "Couldn't open '$filename': $!";

try {
    process($fh);
}
catch {
    warn $_;
}
finally {
    close $fh
        and warn "file closed properly\n";
};

sub process { die "cannot process\n" }
My module Scope::OnExit is intended for exactly that.
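Going by the module's synopsis, usage looks roughly like this (a hedged sketch; treat the exact import as an assumption):

use Scope::OnExit;

open my $fh, "<", "filename" or die "Couldn't open filename: $!";
# Runs when the enclosing scope is left, normally or via die
on_scope_exit { close $fh or warn "Couldn't close filename: $!" };
process($fh);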
The nice thing about lexical filehandles is that they'll get closed (and freed) when they go out of scope. So you can just do something like this:
{
    # bare block creates new scope
    open my $fh, "<", "filename" or die "Couldn't open filename: $!";
    eval { process($fh) };
    # handle exceptions here
    close $fh or die "Couldn't close filename: $!";
}
# $fh is now out of scope and goes away automagically.
I have an app running under Catalyst+FastCGI. And I want it to fork() to do some work in background.
I used this code for plain CGI long ago (and it worked):
defined(my $pid = fork) or die "Can't fork: $!";
if ($pid) {
    # produce some response
    exit 0;
}
die "Can't start a new session: $!" if setsid == -1;
close STDIN  or die $!;
close STDOUT or die $!;
close STDERR or die $!;
# do some work in background
I tried some variations on this under FastCGI but with no success. How should forking be done under FastCGI?
Update: This is what I have now:
defined(my $pid = fork) or die "Can't fork: $!";
if ($pid) {
    $c->stash->{message} = 'ok';
    $c->detach($c->view('JSON'));
}
die "Can't start a new session: $!" if setsid == -1;
close STDIN  or die $!;
close STDOUT or die $!;
close STDERR or die $!;
# do some work, then exit()
I send the request with an AJAX call, and get a "502 Bad Gateway" error in the Firebug console.
I think this FAQ has the right answer:
https://fastcgi-archives.github.io/FastCGI_FAQ.html#Perlfork
You should do $request->Detach(); before the fork, and $request->Attach(); after the forking piece of code is done, where $request is the current FCGI object. At least, it worked for me.
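A minimal sketch of that pattern with a plain FCGI request loop (outside Catalyst; do_background_work() is a hypothetical placeholder):

use FCGI;

my $request = FCGI::Request();
while ($request->Accept() >= 0) {
    # Release the request's streams before forking
    $request->Detach();

    defined(my $pid = fork) or die "Can't fork: $!";
    if ($pid == 0) {
        do_background_work();    # hypothetical background job
        exit 0;
    }

    # Parent reattaches and keeps serving requests
    $request->Attach();
    print "Content-Type: text/plain\r\n\r\nok";
}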
In case of Catalyst::Engine::FastCGI you may need to patch the Catalyst::Engine::FastCGI to get access to the $request variable, since it is local to the run() method there (in the version that is currently on CPAN).
This part isn't going to work well with FastCGI:
if ($pid) {
    # print response
    exit 0;
}
You would be exiting in the parent process, so it would stop responding to further FastCGI requests.
The setsid()s and close()s are to daemonize your background process. This may or may not be necessary in your case.
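In other words, swap the roles: let the child daemonize and do the work while the parent sends the response and returns to the request loop. A hedged sketch of the updated handler body (do_background_work() is a placeholder):

use POSIX qw(setsid);

defined(my $pid = fork) or die "Can't fork: $!";
if ($pid == 0) {
    # child: detach from the request cycle, then work in background
    die "Can't start a new session: $!" if setsid() == -1;
    close STDIN  or die $!;
    close STDOUT or die $!;
    close STDERR or die $!;
    do_background_work();    # placeholder for the real job
    exit 0;
}

# parent: respond and return to the FastCGI loop
$c->stash->{message} = 'ok';
$c->detach($c->view('JSON'));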