How can I close and reopen STDOUT in Perl? - perl

I'd like to close STDOUT to prevent my code from outputting a particular image that I need for further computation but do not want on my web page.
So I want to close STDOUT, do what I have to do with my code, then reopen STDOUT to output stuff to the web page (not to a file).
What I tried is:
close STDOUT;
# my code here
open STDOUT;
This doesn't work...
Thanks

There are several ways to approach your problem, and many of them do not require you to close STDOUT and risk fubaring your program's standard I/O channels.
For example, you can use the (1-arg) select command to direct the output of print commands somewhere else temporarily.
print $stuff_you_want_to_send_to_STDOUT;
select(NOT_STDOUT);
# now default print sends things to NOT_STDOUT.
# This doesn't need to be a real filehandle, though you may get warning
# messages if it is not.
...;
print $the_image_you_dont_want_to_go_to_STDOUT;
...;
select(STDOUT);
# now print sends things to STDOUT again
print $more_stuff_you_do_want_to_go_to_STDOUT;
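For a runnable version of this idea, the dummy handle can be a real one opened to /dev/null; this is a minimal sketch, with the prints standing in for whatever you actually send to the web page:
open my $null, '>', '/dev/null' or die "Can't open /dev/null: $!";
my $old_fh = select $null;   # plain print now goes to /dev/null
print "this image data is discarded\n";
select $old_fh;              # plain print goes to STDOUT again
print "this line reaches the web page\n";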
You can also reassign the *STDOUT glob at run-time without closing any handles.
*OLD_STDOUT = *STDOUT;
print $for_STDOUT;
*STDOUT = *NOT_STDOUT; # again, doesn't need to be a real filehandle
print $stuff_to_suppress;
*STDOUT = *OLD_STDOUT; # restore original STDOUT
print $more_stuff_for_STDOUT;
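Here is the same glob trick as a runnable sketch, using a real handle opened to /dev/null instead of the dummy NOT_STDOUT:
open NULL, '>', '/dev/null' or die "Can't open /dev/null: $!";
*OLD_STDOUT = *STDOUT;
*STDOUT = *NULL;             # plain print now resolves to the /dev/null handle
print "suppressed\n";
*STDOUT = *OLD_STDOUT;       # restore the original STDOUT
print "visible again\n";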

It's bad to close STDOUT, since a lot of code assumes it's always open. It's better to redirect it to /dev/null (unix) or nul (Windows).
If you want to redirect the file descriptor,
use Sub::ScopeFinalizer qw( scope_finalizer );
{
open(my $backup_fh, '>&', \*STDOUT) or die $!;
my $guard = scope_finalizer { open(STDOUT, '>&', $backup_fh) or die $!; };
open(STDOUT, '>', '/dev/null') or die $!;
...
}
If you just want to redirect STDOUT,
{
local *STDOUT;
open(STDOUT, '>', '/dev/null') or die $!;
...
}
If you just want to redirect the default output handle,
use Sub::ScopeFinalizer qw( scope_finalizer );
{
open(my $null_fh, '>', '/dev/null') or die $!;
my $backup_fh = select($null_fh);
my $guard = scope_finalizer { select($backup_fh); };
...
}

You can implement something to catch STDOUT like so:
sub stdout_of (&) {
my $code = shift;
local *STDOUT;
open STDOUT, '>', \(my $stdout_string = '')
or die "reopen STDOUT: $!";
$code->();
return $stdout_string;
}
And then use it like so:
my $stdout = stdout_of { print "hello world" };
Localizing the filehandle inside stdout_of() allows you to avoid the tricks of closing and re-opening STDOUT.
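Applied to the original problem, you would wrap the image-producing code and simply throw the captured string away (generate_image() here is a hypothetical stand-in for the OP's code):
my $unwanted = stdout_of { generate_image() };  # capture and discard the image
print "the rest of the web page\n";             # STDOUT is untouched outside the block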

Read the documentation for open.
Search for "Here is a script that saves, redirects, and restores STDOUT and STDERR using various methods".
What you want to do is not close STDOUT, but rather redirect it to /dev/null temporarily.
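The idiom from that documentation looks roughly like this (a sketch; do_unwanted_output() is a hypothetical stand-in for the code whose output you want to suppress):
open my $saved_stdout, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
open STDOUT, '>', '/dev/null'         or die "Can't redirect STDOUT: $!";
do_unwanted_output();                  # anything printed here is discarded
open STDOUT, '>&', $saved_stdout      or die "Can't restore STDOUT: $!";
close $saved_stdout;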

To (re)open STDOUT or STDERR as an in-memory file, close it first:
close STDOUT;
open STDOUT, '>', \$variable or die "Can't open STDOUT: $!";
From the perl doc: http://perldoc.perl.org/functions/open.html
You have a `:` after your close; don't do that. Also note that a bare `open STDOUT;` as in your question won't reopen the handle on its own; use the three-argument form above.
This thread in perl monks might help you too: http://www.perlmonks.org/?node_id=635010

I checked 2 ways:
via select
via *OLD_STDOUT = *STDOUT
and found that they are not usable in the general case. The reason is that these 2 approaches redirect STDOUT only when print (or something similar) is used in the Perl script itself. But if you use a system() call or run a subscript, its output goes to the real STDOUT anyway.
In my view, the real solution is this:
#!/usr/bin/perl -w
my $file1 = "/tmp/out.txt";
my $file2 = "/tmp/err.txt";
open my $oldSTDOUT, ">&STDOUT";
open OLDERR, ">&",\*STDERR;
open(STDOUT, ">$file1") or print("Can't redirect stdout: to $file1 ");
open(STDERR, ">$file2") or print("Can't redirect stderr: to $file2 ");
print "THIS OUTPUT ISN'T GOT TO STANDARD OUTPUT\n";
system("pwd"); # this output isn;t got to standard output too, that is right!
close(STDOUT);
close(STDERR);
open STDOUT, ">>&", $oldSTDOUT;
open STDERR, ">>&OLDERR";
print "BUT THIS OUTPUT IS SEEN IN A STANDARD OUTPUT\n";
I checked this solution and it worked for me.

Related

Perl : Store Print Outputs of a function without Changing the function

I have a subroutine:
sub application(**arguments**)
{
print ("found the black ship");
# many more print statements.
return 18000;
}
I need to get the data printed by the above subroutine into a file.
PS: I cannot change the function itself; the only thing I can do is call it.
As you are printing to the "default file handle", and not explicitly to STDOUT, you can just call select before you call the method. There's no need to mess around with the STDOUT file handle.
my $output = '';
open my $capture, '>', \$output;
my $old_fh = select $capture;
application(...);
select $old_fh; # restore default file handle, probably STDOUT
close $capture;
print "The output of application() was: $output\n";
OK what you really want is to redirect STDOUT to a file before calling the function, then redirect it back afterwards:
# open filehandle log.txt
open (my $LOG, '>>', 'log.txt');
# select new filehandle
select $LOG;
application();
# restore STDOUT
select STDOUT;
You can re-open STDOUT (you need to close it first though).
close STDOUT;
open STDOUT, '>>', 'somefile.txt' or die $!;
application(...);
This is all in the documentation for open().
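If you need the original STDOUT back afterwards, dup it before closing so it can be restored; a minimal sketch along the same lines:
open my $saved, '>&', \*STDOUT or die "Can't dup STDOUT: $!";
close STDOUT;
open STDOUT, '>>', 'somefile.txt' or die $!;
application();                 # its output now goes to somefile.txt
close STDOUT;
open STDOUT, '>&', $saved or die "Can't restore STDOUT: $!";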

How can I read the STDOUT of a external Program in realtime?

Let me elaborate.
Say I have a Perl program (which was shamelessly copied and edited from perlfaq8: http://perldoc.perl.org/perlfaq8.html#How-can-I-open-a-pipe-both-to-and-from-a-command%3f):
use IPC::Open3;
use Symbol qw(gensym);
use IO::File;
local *CATCHOUT = IO::File->new_tmpfile;
local *CATCHERR = IO::File->new_tmpfile;
my $pid = open3(gensym, ">&CATCHOUT", ">&CATCHERR", "ping -t localhost");
#waitpid($pid, 0);
seek $_, 0, 0 for \*CATCHOUT, \*CATCHERR;
while( <CATCHOUT> ) {
print $_;
}
But the problem with the above program is that it does a sort of ReadToEnd() on the STDOUT belonging to the program (ping.exe in this case), so the output can only be read all at once.
But what I want is to read the STDOUT as it is being written.
If I remove waitpid($pid, 0), the program exits immediately, so that doesn't help either.
Is that Possible ? If so, can you please point me in the right direction.
Update:
Drats!!!! I missed the | symbol... which is essential for piping the output out of ping and into the perl script!!!
use IPC::Open3 qw( open3 );
open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
my $pid = open3(
'<&CHILD_STDIN',
my $child_stdout,
'>&STDERR',
'ping', '-t', 'localhost',
);
while (<$child_stdout>) {
chomp;
print("Got: <<<$_>>>\n");
}
waitpid($pid, 0);
But that can be written as
open(my $ping_fh, '-|', 'ping', '-t', 'localhost') or die $!;
while (<$ping_fh>) {
chomp;
print("Got: <<<$_>>>\n");
}
close($ping_fh);
This just shows the proper usage. If these don't work, it's an unrelated problem: ping is buffering its I/O when not connected to a terminal. You can fool it using a pseudo-tty.
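If you do run into that buffering, the IO::Pty module (assuming it is installed) can supply the terminal; a rough sketch:
use IO::Pty;
my $pty = IO::Pty->new;
my $pid = fork() // die "fork: $!";
if ($pid == 0) {                        # child: attach to the slave side of the pty
    $pty->make_slave_controlling_terminal;
    open STDOUT, '>&', $pty->slave or die "dup STDOUT: $!";
    open STDERR, '>&', $pty->slave or die "dup STDERR: $!";
    exec 'ping', 'localhost' or die "exec: $!";
}
$pty->close_slave;                      # parent reads the master side, line by line
while (<$pty>) { print "Got: $_"; }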
One of the strengths (or weaknesses) of perl is that there is more than one way to do things. This works:
perl -e 'open(F,"ping localhost|"); while(<F>) { s/ms/Milliseconds/; print $_; }'
I just put the s/ms/Milliseconds/ in to show that the data is being read and changed as it streams.
I'm not sure exactly what you have wrong with Open3.

How to read to and write from a pipe in Perl?

I need to modify an existing Perl program. I want to pipe a string (which can contain multiple lines) through an external program and read the output from this program. This external program is used to modify the string. Let's simply use cat as a filter program. I tried it like this, but it doesn't work. (Output of cat goes to STDOUT instead of being read by perl.)
#!/usr/bin/perl
open(MESSAGE, "| cat |") or die("cat failed\n");
print MESSAGE "Line 1\nLine 2\n";
my $message = "";
while (<MESSAGE>)
{
$message .= $_;
}
close(MESSAGE);
print "This is the message: $message\n";
I've read that this isn't supported by Perl because it may end up in a deadlock, and I can understand it. But how do I do it then?
You can use IPC::Open3 to achieve bi-directional communication with child.
use strict;
use IPC::Open3;
my $pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR, 'cat')
or die "open3() failed $!";
my $r;
for(my $i=1;$i<10;$i++) {
print CHLD_IN "$i\n";
$r = <CHLD_OUT>;
print "Got $r from child\n";
}
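One thing the snippet above omits (a small addition, not part of the original): close the child's stdin when you're done writing, and reap the child:
close(CHLD_IN);     # sends EOF to cat
waitpid($pid, 0);   # reap the child so it doesn't linger as a zombie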
This involves system programming, so it’s more than a basic question. As written, your main program doesn’t require full-duplex interaction with the external program. Dataflow travels in one direction, namely
string → external program → main program
Creating this pipeline is straightforward. Perl’s open has a useful mode explained in the “Safe pipe opens” section of the perlipc documentation.
Another interesting approach to interprocess communication is making your single program go multiprocess and communicate between—or even amongst—yourselves. The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you’ve opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
This is an open that involves a pipe, which gives nuance to the return value. The perlfunc documentation on open explains.
If you open a pipe on the command - (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
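A minimal illustration of that return-twice behaviour, as a sketch, before the full program below:
my $pid = open(my $from_child, '-|') // die "$0: fork: $!";
if ($pid == 0) {                  # child: its STDOUT feeds the parent's handle
    print "hello from the child\n";
    exit 0;
}
print while <$from_child>;        # parent: reads what the child printed
close $from_child;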
To create the scaffolding, we work in right-to-left order using open to fork a new process at each step.
Your main program is already running.
Next, fork a process that will eventually become the external program.
Inside the process from step 2
First fork the string-printing process so as to make its output arrive on our STDIN.
Then exec the external program to perform its transformation.
Have the string-printer do its work and then exit, which kicks up to the next level.
Back in the main program, read the transformed result.
With all of that set up, all you have to do is implant your suggestion at the bottom, Mr. Cobb.
#! /usr/bin/env perl
use 5.10.0; # for defined-or and given/when
use strict;
use warnings;
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
"V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
"V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel {
given (open(STDIN, "-|") // die "$0: fork: $!") { # / StackOverflow hiliter
snow_fortress when 0;
exec @transform or die "$0: exec: $!";
}
}
given (open(my $fh, "-|") // die "$0: fork: $!") {
hotel when 0;
print while <$fh>;
close $fh or warn "$0: close: $!";
}
Thanks for the opportunity to write such a fun program!
You can use the -n commandline switch to effectively wrap your existing program code in a while-loop... look at the man page for -n:
LINE:
while (<>) {
... # your program goes here
}
Then you can use the operating system's pipe mechanism directly
cat file | your_perl_prog.pl
(Edit)
I'll try to explain this more carefully...
The question is not clear about what part the perl program plays: filter or final stage. This works in either case, so I will assume it is the latter.
'your_perl_prog.pl' is your existing code. I'll call your filter program 'filter'.
Modify your_perl_prog.pl so that the shebang line has an added '-n' switch: #!/usr/bin/perl -n or #!/bin/env "perl -n"
This effectively puts a while(<>){} loop around the code in your_perl_prog.pl
add a BEGIN block to print the header:
BEGIN { print "HEADER LINE\n"; }
You can read each line with '$line = <>;' and process/print
Then invoke the lot with
cat sourcefile |filter|your_perl_prog.pl
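Putting those pieces together, your_perl_prog.pl might look like this minimal sketch:
#!/usr/bin/perl -n
# -n wraps everything below in: LINE: while (<>) { ... }
BEGIN { print "HEADER LINE\n"; }   # runs once, before the implicit loop
print "processed: $_";             # runs once for every line piped in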
I want to expand on @Greg Bacon's answer without changing it.
I had to execute something similar, but wanted to avoid the given/when constructs, and also found that explicit exit() calls were missing, because the sample code relies on falling through to exit.
I also had to make it work under ActiveState Perl, where that approach does not work.
See this question: How to read to and write from a pipe in perl with ActiveState Perl?
#! /usr/bin/env perl
use strict;
use warnings;
my $isActiveStatePerl = defined(&Win32::BuildNumber);
sub pipeFromFork
{
return open($_[0], "-|") if (!$isActiveStatePerl);
die "active state perl cannot cope with dup file handles after fork";
pipe $_[0], my $child or die "cannot create pipe";
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid) { # parent
close $child;
} else { # child
open(STDOUT, ">&=", $child) or die "cannot clone child to STDOUT";
close $_[0];
}
return $pid;
}
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
"V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
"V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel
{
my $fh;
my $pid = pipeFromFork($fh); # my $pid = open STDIN, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
snow_fortress;
exit(0);
}
open(STDIN, "<&", $fh) or die "cannot clone to STDIN";
exec @transform or die "$0: exec: $!";
}
my $fh;
my $pid = pipeFromFork($fh); # my $pid = open my $fh, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
hotel;
exit(0);
}
print while <$fh>;
close $fh or warn "$0: close: $!";
The simplest way to do what the OP needs -- not involving all these cool internals -- is to use a temporary file to hold the output until the external processor is done, like so:
open ToTemp, "|/usr/bin/tac>/tmp/MyTmp$$.whee" or die "open the tool: $!";
print ToTemp $TheMessageWhateverItIs;
close ToTemp;
my $Result = `cat /tmp/MyTmp$$.whee`; # or open and read it, or use File::Slurp, etc
unlink "/tmp/MyTmp$$.whee";
Of course, this isn't going to work for something interactive, but co-routines appear to be out of the scope of the original question.

Perl: if a subroutine from a package prints something to my terminal, how do I use its output in my program without changing the package?

I am using a package in Perl (Biomart) that prints out the results of a query. The syntax that prints the output looks like this:
$query_runner->execute($query);
$query_runner->printResults();
And that prints the results of my query to my terminal. Instead, I would like the stuff that's printed to be printed to an output file. I tried:
$output = "#ARGV[1]";
open OUT , ">$output" or die "Can't open $output: #ARGV[1].txt!\n";
$query_runner->execute($query);
print OUT $query_runner->printResults();
But that does not seem to work, the subroutine printResults() still prints to my terminal instead of the output file. Is there a way to print its output to my outputfile without changing the subroutine of the package itself?
You can use select to set the default print filehandle, eg.
select (OUT);
You can reopen STDOUT to write to the given file, call the printing sub and then restore the old STDOUT:
open my $oldout, ">&STDOUT" or die "Can't dup STDOUT: $!";
open STDOUT, '>', $ARGV[1] or die "Can't open $ARGV[1]";
$query_runner->printResults();
open STDOUT, ">&", $oldout or die "Can't dup \$oldout: $!";
From https://github.com/pubmed2ensembl/biomart-plus-extras/blob/master/lib/BioMart/QueryRunner.pm :
sub printResults {
my ($self, $filehandle, $lines) = @_;
$filehandle ||= \*STDOUT; # in case no fhandle is provided
...
}
Thus, printResults takes an optional argument of a filehandle to output to. If not provided, it defaults to STDOUT. You would use it as:
open(my $output, ">", $ARGV[1]);
$query_runner->execute($query);
$query_runner->printResults($output);

Perl IPC::Run appending output and parsing stderr while keeping it in a batch of files

I'm trying to wrap my head around IPC::Run to be able to do the following. For a list of files:
my @list = ('/my/file1.gz','/my/file2.gz','/my/file3.gz');
I want to execute a program that has built-in decompression, does some editing and filtering to them, and prints to stdout, giving some stats to stderr:
~/myprogram options $file
I want to append the stdout of the execution for all the files in the list to one single $out file, and be able to parse and store a couple of lines in each stderr as variables, while letting the rest be written out into separate fileN.log files for each input file.
I want stdout to all go into a ">>$all_into_one_single_out_file", it's the err that I want to keep in different logs.
After reading the manual, I've gotten as far as the code below, where the commented part is what I don't know how to do:
for $file in @list {
my @cmd;
push @cmd, "~/myprogram options $file";
IPC::Run::run \@cmd, \undef, ">>$out",
sub {
my $foo .= $_[0];
#check if I want to keep my line, save value to $mylog1 or $mylog2
#let $foo and all the other lines be written into $file.log
};
}
Any ideas?
First things first. my $foo .= $_[0] is not necessary. $foo is a new (empty) value, so appending to it via .= doesn't do anything. What you really want is a simple my ($foo) = @_;.
Next, you want to have output go to one specific file for each command while also (depending on some conditional) putting that same output to a common file.
Perl (among other languages) has a great facility to help with problems like this, called closures. Whatever variables are in scope at the time a subroutine is defined remain available for you to use inside it.
use strict;
use warnings;
use IPC::Run qw(run new_chunker);
my @list = qw( /my/file1 /my/file2 /my/file3 );
open my $shared_fh, '>', '/my/all-stdout-goes-here' or die;
open my $log1_fh, '>', '/my/log1' or die "Cannot open /my/log1: $!\n";
open my $log2_fh, '>', '/my/log2' or die "Cannot open /my/log2: $!\n";
foreach my $file ( @list ) {
my @cmd = ( "~/myprogram", option1, option2, ..., $file );
open my $log_fh, '>', "$file.log"
or die "Cannot open $file.log: $!\n";
run \@cmd, '>', $shared_fh,
'2>', new_chunker, sub {
# $out contains each line of stderr from the command
my ($out) = @_;
if ( $out =~ /something interesting/ ) {
print $log1_fh $out;
}
if ( $out =~ /something else interesting/ ) {
print $log2_fh $out;
}
print $log_fh $out;
return 1;
};
}
Each of the output file handles will get closed when they're no longer referenced by anything -- in this case at the end of this snippet.
I fixed your @cmd, though I don't know what your option1, option2, ... will be.
I also changed the way you are calling run. You can call it with a simple > to tell it the next thing is for output, and the new_chunker (from IPC::Run) will break your output into one-line-at-a-time instead of getting all the output all-at-once.
I also skipped over the fact that you're outputting to .gz files. If you want to write to compressed files, instead of opening as:
open my $fh, '>', $file or die "Cannot open $file: $!\n";
Just open up:
open my $fh, '|-', "gzip -c > $file" or die "Cannot startup gzip: $!\n";
Be careful here, as this is a prime spot for command injection (e.g. $file set to /dev/null; /sbin/reboot). How to handle this is covered in many, many other places and is beyond the scope of what you're actually asking.
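One way to sidestep the shell (and the injection risk) entirely is to compress from within Perl, e.g. with the IO::Compress::Gzip module that ships with modern perls; a sketch:
use IO::Compress::Gzip qw( $GzipError );
my $gz = IO::Compress::Gzip->new($file)
    or die "Cannot open gzip stream to $file: $GzipError\n";
print $gz $data_to_compress;   # $data_to_compress is a hypothetical stand-in
close $gz;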
EDIT: Re-read the problem a bit more and changed the answer to reflect the actual problem more closely.
EDIT 2: Updated per your comment. All stdout goes to one file, and the stderr from each command is fed to the inline subroutine. Also fixed a stupid typo (the for syntax was pseudocode, not Perl).