For my upcoming PulseAudio library I want to redirect STDERR and STDOUT to /dev/null. Logically, this should work:
sub _exec {
    open(*STDERR, '>', '/dev/null');
    open(*STDOUT, '>', '/dev/null');
    CORE::system('pacmd', @_) or die $?;
}
However, this still outputs to the terminal, so I tried localizing the globs first:
sub _exec {
    local (*STDERR, *STDOUT);
    open(*STDERR, '>', '/dev/null');
    open(*STDOUT, '>', '/dev/null');
    CORE::system('pacmd', @_) or die $?;
}
That leaves me with two questions
First and foremost, why am I experiencing the behavior that I'm seeing?
Secondly, is there a more efficient method that doesn't involve storing the old value and replacing it?
The child writes to fd 1 and 2, yet you didn't change fd 1 and 2. You just created new Perl variables (something the child knows nothing about) with fd 3 and 4 (something the child doesn't care about).
Here's one way of achieving what you want:
use IPC::Open3 qw( open3 );

sub _exec {
    open(local *CHILD_STDIN,  '<', '/dev/null') or die $!;
    open(local *CHILD_STDOUT, '>', '/dev/null') or die $!;
    my $pid = open3(
        '<&CHILD_STDIN',
        '>&CHILD_STDOUT',
        undef,    # 2>&1
        'pacmd', @_,
    );
    waitpid($pid, 0);
    die $! if $? == -1;
    die $? if $?;
}
open3 is pretty low level, but it's far higher level than doing it yourself*. IPC::Run and IPC::Run3 are even higher level.
* — It takes care of forking and of assigning the handles to the right file descriptors. It handles error checking, including making pre-exec errors in the child appear as the launch failures they are, rather than as errors from the executed program.
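For reference, you can also fix this without open3 by redirecting the real fd 1 and fd 2 around the system call, using the save/duplicate/restore idiom documented for open. A minimal sketch (the name _exec_quiet and the list-sinks argument are only illustrative):

use strict;
use warnings;

sub _exec_quiet {
    my @cmd = @_;

    # Save the current STDOUT/STDERR file descriptors.
    open(my $saved_out, '>&', \*STDOUT) or die "dup STDOUT: $!";
    open(my $saved_err, '>&', \*STDERR) or die "dup STDERR: $!";

    # Re-open the real fd 1 and fd 2 onto /dev/null so the child inherits them.
    open(STDOUT, '>', '/dev/null') or die "redirect STDOUT: $!";
    open(STDERR, '>', '/dev/null') or die "redirect STDERR: $!";

    my $status = system(@cmd);

    # Restore the original descriptors.
    open(STDOUT, '>&', $saved_out) or die "restore STDOUT: $!";
    open(STDERR, '>&', $saved_err) or die "restore STDERR: $!";

    die "command failed: $?" if $status != 0;
}

_exec_quiet('pacmd', 'list-sinks');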
Is there any noticeable performance difference between these two ways of reading/writing a user file with Perl, on Linux?
Option 1:
open (READFILE, '<:utf8', "users/$_[0]") or die ("no read users/$_[0]");
# Do the reading
close (READFILE) or die;
# Do more stuff
open (WRITEFILE, '>:utf8', "users/$_[0]") or die ("no write users/$_[0]");
flock (WRITEFILE, 2) or die ("no lock users/$_[0]");
# Do the writing
close (WRITEFILE) or die;
Option 2:
open (USERFILE, '+<:utf8', "users/$_[0]") or die ("no open users/$_[0]");
flock (USERFILE, 2) or die ("no lock users/$_[0]");
# Do the reading
# Do more stuff
seek (USERFILE, 0, 0);
truncate (USERFILE, 0);
# Do the writing
close (USERFILE) or die ("no write users/$_[0]");
The user files are not big, typically 20-40 lines or 2-4 KB each.
And would there be other reasons for choosing option 1 or 2 (or a 3rd option)?
Here is a benchmark you can use to test it. I suspect that getting a new file descriptor is the part that takes longer when you close and then open again.
#!/usr/bin/env perl
use warnings;
use strict;
use open qw(:encoding(utf8) :std);
use Benchmark qw<cmpthese>;
my $text = <<TEXT;
I had some longer text here, but for better readability, just
these two lines.
TEXT
# Create the test file first, so the read opens below don't die.
{
    open my $init, '>', 'bla' or die "$!";
    print {$init} $text;
    close $init or die;
}

cmpthese(10_000, {
    close => sub {
        open my $file, '<', "bla" or die "$!";
        my @array = <$file>;
        close $file or die;
        open $file, '>', "bla" or die "$!";
        $file->print($text);
    },
    truncate => sub {
        open my $file, '+<', "bla" or die "$!";
        my @array = <$file>;
        seek $file, 0, 0;
        truncate $file, 0;
        $file->print($text);
    },
    truncate_flock => sub {
        open my $file, '+<', "bla" or die "$!";
        flock $file, 2;
        my @array = <$file>;
        seek $file, 0, 0;
        truncate $file, 0;
        $file->print($text);
    },
});
Output on my machine:
                 Rate   close  truncate_flock  truncate
close          2703/s      --            -15%      -17%
truncate_flock 3175/s     17%              --       -3%
truncate       3257/s     21%              3%        --
A higher rate is better. Using close is 1.17 times slower.
But it heavily depends on how long your "more stuff" takes: in the truncate variants you hold the flock for that whole time, so another program trying to access the file may be slowed down by the lock.
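As an aside, the magic number 2 passed to flock is LOCK_EX; the Fcntl constants say the same thing more readably:

use Fcntl qw(:flock);    # imports LOCK_SH, LOCK_EX, LOCK_UN, LOCK_NB

open my $fh, '+<', 'bla' or die "$!";
flock($fh, LOCK_EX) or die "flock: $!";    # same as flock($fh, 2)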
I need to modify an existing Perl program. I want to pipe a string (which can contain multiple lines) through an external program and read the output from this program. This external program is used to modify the string. Let's simply use cat as a filter program. I tried it like this, but it doesn't work. (Output of cat goes to STDOUT instead of being read by perl.)
#!/usr/bin/perl
open(MESSAGE, "| cat |") or die("cat failed\n");
print MESSAGE "Line 1\nLine 2\n";
my $message = "";
while (<MESSAGE>) {
    $message .= $_;
}
close(MESSAGE);
print "This is the message: $message\n";
I've read that this isn't supported by Perl because it may end up in a deadlock, and I can understand it. But how do I do it then?
You can use IPC::Open3 to achieve bi-directional communication with the child.
use strict;
use warnings;
use IPC::Open3;

# open3 raises an exception on failure, so no return-value check is needed.
my $pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR, 'cat');

# Autoflush CHLD_IN, or the prints below sit in perl's buffer and the
# read from CHLD_OUT deadlocks.
select((select(CHLD_IN), $| = 1)[0]);

my $r;
for (my $i = 1; $i < 10; $i++) {
    print CHLD_IN "$i\n";
    $r = <CHLD_OUT>;
    print "Got $r from child\n";
}
close CHLD_IN;       # send EOF to the child
waitpid($pid, 0);    # reap it
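If you don't need the line-by-line back-and-forth and just want to push one string through the filter, a higher-level module such as IPC::Run3 keeps it short. A sketch, assuming IPC::Run3 is installed from CPAN:

use strict;
use warnings;
use IPC::Run3 qw(run3);

my $in = "Line 1\nLine 2\n";
my $out;
run3(['cat'], \$in, \$out);    # feed $in to cat's stdin, capture its stdout
print "This is the message: $out\n";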
This involves system programming, so it’s more than a basic question. As written, your main program doesn’t require full-duplex interaction with the external program. Dataflow travels in one direction, namely
string → external program → main program
Creating this pipeline is straightforward. Perl’s open has a useful mode explained in the “Safe pipe opens” section of the perlipc documentation.
Another interesting approach to interprocess communication is making your single program go multiprocess and communicate between—or even amongst—yourselves. The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you’ve opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
This is an open that involves a pipe, which gives nuance to the return value. The perlfunc documentation on open explains.
If you open a pipe on the command - (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
To create the scaffolding, we work in right-to-left order using open to fork a new process at each step.
1. Your main program is already running.
2. Next, fork a process that will eventually become the external program.
3. Inside the process from step 2:
   1. First fork the string-printing process so as to make its output arrive on our STDIN.
   2. Then exec the external program to perform its transformation.
4. Have the string-printer do its work and then exit, which kicks up to the next level.
5. Back in the main program, read the transformed result.
With all of that set up, all you have to do is implant your suggestion at the bottom, Mr. Cobb.
#! /usr/bin/env perl

use 5.10.0;    # for defined-or and given/when
use strict;
use warnings;

my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] );    # rot13

my @inception = (
    "V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
    "V jnf qvfnccbvagrq gung lbh gevrq.",
);

sub snow_fortress { print map "$_\n", @inception }

sub hotel {
    given (open(STDIN, "-|") // die "$0: fork: $!") {
        snow_fortress when 0;
        exec @transform or die "$0: exec: $!";
    }
}

given (open(my $fh, "-|") // die "$0: fork: $!") {
    hotel when 0;
    print while <$fh>;
    close $fh or warn "$0: close: $!";
}
Thanks for the opportunity to write such a fun program!
You can use the -n command-line switch to effectively wrap your existing program code in a while loop; see the entry for -n in perlrun:
LINE:
  while (<>) {
      ...    # your program goes here
  }
Then you can use the operating system's pipe mechanism directly
cat file | your_perl_prog.pl
(Edit)
I'll try to explain this more carefully...
The question is not clear about what part the perl program plays: filter or final stage. This works in either case, so I will assume it is the latter.
'your_perl_prog.pl' is your existing code. I'll call your filter program 'filter'.
Modify your_perl_prog.pl so that the shebang line has an added '-n' switch: #!/usr/bin/perl -n or #!/bin/env "perl -n"
This effectively puts a while(<>){} loop around the code in your_perl_prog.pl
add a BEGIN block to print the header:
BEGIN { print "HEADER LINE\n"; }
Inside the implicit loop each line is already in $_ (you don't need '$line = <>;'); process and print as needed
Then invoke the lot with
cat sourcefile |filter|your_perl_prog.pl
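Putting those steps together, a hypothetical your_perl_prog.pl could look like this (the per-line processing is just an example):

#!/usr/bin/perl -n
# -n wraps the code below in:  LINE: while (<>) { ... }
BEGIN { print "HEADER LINE\n"; }    # runs once, before the implicit loop
chomp;                              # the current input line is in $_
print "processed: $_\n";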
I want to expand on @Greg Bacon's answer without changing it.
I had to do something similar, but wanted to code without the given/when constructs, and found that explicit exit() calls were missing: the sample code falls through and exits implicitly.
I also had to make it work under ActiveState Perl, where this approach does not work.
See this question: How to read to and write from a pipe in perl with ActiveState Perl?
#! /usr/bin/env perl

use strict;
use warnings;

my $isActiveStatePerl = defined(&Win32::BuildNumber);

sub pipeFromFork
{
    return open($_[0], "-|") if (!$isActiveStatePerl);
    die "active state perl cannot cope with dup file handles after fork";
    # The pipe-based fallback below is unreachable while the die above
    # remains; it is kept to show the intended ActiveState workaround.
    pipe $_[0], my $child or die "cannot create pipe";
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid) {    # parent
        close $child;
    } else {       # child
        open(STDOUT, ">&=", $child) or die "cannot clone child to STDOUT";
        close $_[0];
    }
    return $pid;
}

my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] );    # rot13

my @inception = (
    "V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
    "V jnf qvfnccbvagrq gung lbh gevrq.",
);

sub snow_fortress { print map "$_\n", @inception }

sub hotel
{
    my $fh;
    my $pid = pipeFromFork($fh);    # was: my $pid = open STDIN, "-|";
    defined($pid) or die "$0: fork: $!";
    if (0 == $pid) {
        snow_fortress;
        exit(0);
    }
    open(STDIN, "<&", $fh) or die "cannot clone to STDIN";
    exec @transform or die "$0: exec: $!";
}

my $fh;
my $pid = pipeFromFork($fh);    # was: my $pid = open my $fh, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
    hotel;
    exit(0);
}

print while <$fh>;
close $fh or warn "$0: close: $!";
The simplest way to do what the OP needs -- not involving all these cool internals -- is to use a temporary file to hold the output until the external processor is done, like so:
open ToTemp, "|/usr/bin/tac>/tmp/MyTmp$$.whee" or die "open the tool: $!";
print ToTemp $TheMessageWhateverItIs;
close ToTemp;
my $Result = `cat /tmp/MyTmp$$.whee`; # or open and read it, or use File::Slurp, etc
unlink "/tmp/MyTmp$$.whee";
Of course, this isn't going to work for something interactive, but co-routines appear to be out of the scope of the original question.
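A variant of the same idea, sketched with File::Temp so the temporary path is generated and cleaned up automatically (the tac filter is kept from the example above; the message content is made up):

use strict;
use warnings;
use File::Temp qw(tempfile);

my $TheMessageWhateverItIs = "line one\nline two\n";    # illustrative content

my ($tmp_fh, $tmp_name) = tempfile(UNLINK => 1);    # deleted at program exit
open my $to_tool, '|-', "/usr/bin/tac > $tmp_name" or die "open the tool: $!";
print $to_tool $TheMessageWhateverItIs;
close $to_tool or die "tool failed: $?";

open my $from_tmp, '<', $tmp_name or die $!;
my $Result = do { local $/; <$from_tmp> };    # slurp the transformed text
close $from_tmp;
print $Result;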
I'd like to close STDOUT to prevent my code from outputting a particular image that I need for further computation but do not want on my web page.
So I want to close STDOUT, do what I have to do with my code, then reopen STDOUT to output stuff to a web page (not to a file).
What I tried is:
close STDOUT;
# my code here
open STDOUT;
This doesn't work...
Thanks
There are several ways to approach your problem, and many of them do not require you to close STDOUT and risk fubaring your program's standard I/O channels.
For example, you can use the (1-arg) select command to direct the output of print commands somewhere else temporarily.
print $stuff_you_want_to_send_to_STDOUT;
select(NOT_STDOUT);
# now default print sends things to NOT_STDOUT.
# This doesn't need to be a real filehandle, though you may get warning
# messages if it is not.
...;
print $the_image_you_dont_want_to_go_to_STDOUT;
...;
select(STDOUT);
# now print sends things to STDOUT again
print $more_stuff_you_do_want_to_go_to_STDOUT;
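For instance, a concrete version of the select() approach with a real sink handle (NULL is just an assumed name):

open NULL, '>', '/dev/null' or die $!;

select(NULL);      # default print target is now /dev/null
print "this is suppressed\n";
select(STDOUT);    # default print target is STDOUT again
print "this is visible\n";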
You can also reassign the *STDOUT glob at run-time without closing any handles.
*OLD_STDOUT = *STDOUT;
print $for_STDOUT;
*STDOUT = *NOT_STDOUT; # again, doesn't need to be a real filehandle
print $stuff_to_suppress;
*STDOUT = *OLD_STDOUT; # restore original STDOUT
print $more_stuff_for_STDOUT;
It's bad to close STDOUT, since a lot of code assumes it's always open. It's better to redirect it to /dev/null (Unix) or nul (Windows).
If you want to redirect the file descriptor,
use Sub::ScopeFinalizer qw( scope_finalizer );
{
open(my $backup_fh, '>&', \*STDOUT) or die $!;
my $guard = scope_finalizer { open(STDOUT, '>&', $backup_fh) or die $!; };
open(STDOUT, '>', '/dev/null') or die $!;
...
}
If you just want to redirect STDOUT,
{
local *STDOUT;
open(STDOUT, '>', '/dev/null') or die $!;
...
}
If you just want to redirect the default output handle,
use Sub::ScopeFinalizer qw( scope_finalizer );
{
open(my $null_fh, '>', '/dev/null') or die $!;
my $backup_fh = select($null_fh);
my $guard = scope_finalizer { select($backup_fh); };
...
}
You can implement something to catch STDOUT like so:
sub stdout_of (&) {
my $code = shift;
local *STDOUT;
open STDOUT, '>', \(my $stdout_string = '')
or die "reopen STDOUT: $!";
$code->();
return $stdout_string;
}
And then use it like so:
my $stdout = stdout_of { print "hello world" };
Localizing the filehandle inside stdout_of() allows you to avoid the tricks of closing and re-opening STDOUT.
Read the documentation for open.
Search for "Here is a script that saves, redirects, and restores STDOUT and STDERR using various methods".
What you want to do is not close STDOUT, but rather redirect it to /dev/null temporarily.
To (re)open STDOUT or STDERR as an in-memory file, close it first:
close STDOUT;
open STDOUT, '>', \$variable or die "Can't open STDOUT: $!";
From the perl doc: http://perldoc.perl.org/functions/open.html
You have a ':' after your close; don't do that. The open above should also work with just
open STDOUT;
This thread in perl monks might help you too: http://www.perlmonks.org/?node_id=635010
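A small self-contained sketch of that in-memory technique, including restoring the real STDOUT afterwards:

open my $real_stdout, '>&', \*STDOUT or die $!;    # save the real STDOUT

close STDOUT;
my $captured = '';
open STDOUT, '>', \$captured or die "Can't open STDOUT: $!";
print "this goes into the scalar";                 # lands in $captured

open STDOUT, '>&', $real_stdout or die $!;         # restore the real STDOUT
print "captured: $captured\n";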
I checked 2 ways:
via select
via *OLD_STDOUT = *STDOUT
and found they are not usable in the common case. The reason is that these 2 approaches redirect STDOUT only when print (or something similar) is used in the Perl script itself. But if you use a system() call or run a subscript, its output goes to the real STDOUT anyway. In my view, the real solution is:
#!/usr/bin/perl -w
my $file1 = "/tmp/out.txt";
my $file2 = "/tmp/err.txt";
open my $oldSTDOUT, ">&", \*STDOUT or die "Can't dup STDOUT: $!";
open OLDERR,        ">&", \*STDERR or die "Can't dup STDERR: $!";
open(STDOUT, ">", $file1) or print("Can't redirect stdout to $file1\n");
open(STDERR, ">", $file2) or print("Can't redirect stderr to $file2\n");
print "THIS OUTPUT DOESN'T GO TO STANDARD OUTPUT\n";
system("pwd");    # this output doesn't go to standard output either, which is right!
close(STDOUT);
close(STDERR);
open STDOUT, ">&", $oldSTDOUT or die "Can't restore STDOUT: $!";
open STDERR, ">&OLDERR"       or die "Can't restore STDERR: $!";
print "BUT THIS OUTPUT IS SEEN ON STANDARD OUTPUT\n";
I checked this solution and it worked for me.
open(my $fh, '>', $path) || die $!;
my_sub($fh);
Can my_sub() somehow extrapolate $path from $fh?
A filehandle might not even be connected to a file but instead to a network socket or a pipe hooked to the standard output of a child process.
If you want to associate handles with paths your code opens, use a hash and the fileno operator, e.g.,
my %fileno2path;

sub myopen {
    my ($path) = @_;
    open my $fh, "<", $path or die "$0: open: $!";
    $fileno2path{fileno $fh} = $path;
    $fh;
}

sub myclose {
    my ($fh) = @_;
    delete $fileno2path{fileno $fh};
    close $fh or warn "$0: close: $!";
}

sub path {
    my ($fh) = @_;
    $fileno2path{fileno $fh};
}
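Hypothetical usage of those helpers (the path is just an example):

my $fh = myopen('/etc/hostname');          # any readable file
print "reading from ", path($fh), "\n";    # prints /etc/hostname
myclose($fh);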
For whoever might be looking for a better way to find the file name from a filehandle or file descriptor: I would prefer find -inum, if available. Or, on Linux, just read the symlink under /proc; the only drawback is that it is Unix/Linux specific:
my $filename = '/tmp/tmp.txt';
open my $fh, '>', $filename or die $!;
my $fd = fileno $fh;
print readlink("/proc/$$/fd/$fd"), "\n";
You can call stat or IO::Handle::stat on a filehandle -- that will give you the device and inode of the file that you have opened. With that and a little operating system wizardry you can find the filename. OK, maybe a lot of operating system wizardry.
The find command has an -inum option to find a file with a specified inode number. This is probably not going to be as efficient as caching the path when you open the file, as gbacon recommends.
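A rough sketch of that device/inode approach, shelling out to find (the /tmp search root and the file name are assumptions, and this is slow by design):

open my $fh, '<', '/tmp/tmp.txt' or die $!;    # illustrative file
my $inode = (stat $fh)[1];                     # inode number of the open file
chomp(my $path = `find /tmp -xdev -inum $inode 2>/dev/null`);
print "resolved path: $path\n" if $path;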