Redirected STDOUT without an ending EOL stays in a buffer and is not shown

I am writing a program (a commandline frontend) which redirects stdout and stdin of arbitrary commandline programs. The problem is platform-independent. To understand the problem, I have written a simple commandline program where the problem occurs:
#include <stdio.h>
int main (void) {
    char Expression[200];
    printf ("Enter first expression: "); scanf ("%199s", Expression);
    printf ("You have entered: %s\n", Expression);
    printf ("\n");
    printf ("(Now query with Stderr)\n");
    fprintf (stderr, "Enter second expression: "); scanf ("%199s", Expression);
    printf ("You have entered: %s\n", Expression);
    return 0;
}
The stdout "Enter the first expression" rests in the output buffer and is not sent via redirected stdout pipe to my program. So the first question is "Enter second expression" because the buffer problem only occurs with stdout, not with stderr. The buffer content (first query) is sent if the user has typed the input and presses RETURN. So stdout only runs through the pipe after EOL, stderr shows the output immediately.
If you run Info-ZIP's UNZIP and unzip files that already exist, UNZIP sends a query:
replace myfile.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename:
and this query is sent via stderr, so the problem does not occur there. But other programs do run into the problem.
The commandline frontend of Windows 10 has been rewritten. If such a situation occurs (e.g. when the cmd.exe built-in "copy" command asks before overwriting an existing file), the new commandline frontend waits 3 seconds (!) and then shows the buffered query. So it seems that even the Microsoft programmers had to resort to a "dirty hack" to work around this problem.
My question is: how can I force the user's program to flush its stdout buffer even when no EOL has been received? I have full access to the session where the user's program runs, via a special helper program in the same session that handles the communication between my graphical frontend and the text window where cmd.exe runs.

I have now realized that the fflush call has to be placed in the user's program. Stdout is always fully buffered when it is redirected to a pipe or file; stderr is not. That is why stderr output can appear before stdout output that was written earlier. I wrote a test sample where the stderr output shows up 4 lines before the place where it should appear.
So this means that every programmer who writes to both stdout and stderr, or who prompts the user for input, has to flush stdout:
//first case: prompt followed by user input
char Expression[200];
printf ("Enter an expression: "); fflush (stdout); scanf ("%199s", Expression);
printf ("You have entered: %s\n", Expression);
//second case: stderr output after stdout output
fflush (stdout); fprintf (stderr, "An error has occurred.\n");
I checked postings on the Internet, and it seems that this problem occurs on all platforms.
So if you write programs that use both stdout and stderr, call fflush(stdout) in these two cases. The reverse case (writing to stderr and then to stdout) is not a problem, because stderr is not buffered. (I hope this holds on all platforms.)

No, I have no option to modify the programs that users execute in my commandline frontend.
Anyone can redirect stdout and stderr to a single text file:
[C:\] userprog >output.txt 2>&1
That means both stdout and stderr are redirected to the single file "output.txt". If stdout is not flushed correctly, the lines appear in the wrong order in that file.
I have noticed that all programs from professional sources flush stdout in these cases and produce the proper order.

Related

Running external program in Perl without simultaneous output

I have a Perl script
for my $i (1..10) {
print "How $i\n";
sleep(1);
}
I want to run it from another Perl script and capture its output. I know that I can use qx() or backticks, but they return all of the output at once, when the program exits.
What I want instead is for the second script to print the first script's output as soon as it becomes available, i.e. for the example code above the output should appear in ten steps, not in one go.
I looked at this question and wrote my second script as
my $cmd = "perl a.pl";
open my $cmd_fh, "$cmd |";
while(<$cmd_fh>) {
print "Received: $_\n";
STDOUT->flush();
}
close $cmd_fh;
However, the output is still printed all at once. Is there a way to get this done?
The child sends its output in chunks of 4 KiB or 8 KiB rather than a line at a time. Perl programs, like most programs, flush STDOUT on every newline, but only when connected to a terminal; they fully buffer their output otherwise. You can add $| = 1; to the child to disable buffering of STDOUT.
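With that change, the child script from the question would look like this:
$| = 1;                 # flush STDOUT after every print, even when writing to a pipe
for my $i (1..10) {
    print "How $i\n";
    sleep(1);
}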
If you can't modify the child, such programs can be fooled by using pseudo-ttys instead of pipes. See IPC::Run. (Search its documentation for "pty".)
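A rough sketch of that approach, assuming IPC::Run is installed, a.pl is the child script from the question, and the platform supports pseudo-ttys (native Windows does not); the loop below also naively assumes each chunk ends at a line boundary:
use strict;
use warnings;
use IPC::Run qw(start);

my ($in, $out) = ('', '');

# '>pty>' attaches the child's STDOUT to a pseudo-tty, so the child believes
# it is writing to a terminal and flushes after every line.
my $h = start ['perl', 'a.pl'], '<pty<', \$in, '>pty>', \$out;

while ( $h->pumpable ) {
    $h->pump;                                      # wait for more output from the child
    print "Received: $_\n" for split /\r?\n/, $out;  # a pty may emit CRLF line endings
    $out = '';
}
$h->finish;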

Perl STDOUT redirected to a pipe, no output after calling sleep()

I'm having problems with Perl on Windows (both ActivePerl and Strawberry) when redirecting a script's STDOUT to a pipe and using sleep(). Try this:
perl -e "for (;;) { print 'Printing line ', $i++, \"\n\"; sleep(1); }"
This works as expected. Now pipe it to tee (or to some file, same result):
perl -e "for (;;) { print 'Printing line ', $i++, \"\n\"; sleep(1); }" | tee
There's no output at all; tee captures nothing. However, the perl script is still running, there's just nothing on STDOUT until the script finishes, and then all of the output is dumped to tee at once. And if the STDOUT buffer fills up, the script might hang.
Now, if you remove the sleep() call, the pipe works as expected! What's going on?
I found a workaround; disabling the STDOUT buffering with $|=1 makes the pipe work when using the sleep, but... why? Can anyone explain and offer a better solution?
You are suffering from buffering. Add $| = 1; to unbuffer STDOUT.
All file handles except STDERR are buffered by default, but STDOUT uses a minimal form of buffering (flushed on newlines) when connected to a terminal. When the terminal is replaced by a pipe, normal full buffering takes over again.
Removing the sleep call doesn't change anything except to speed things up. Instead of taking minutes to fill up the buffer, it takes milliseconds. With or without it, the output is still written in 4 KiB or 8 KiB blocks (depending on your version of Perl).
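For the one-liner from the question, that just means unbuffering STDOUT before the loop (a sketch, keeping the Windows-style quoting of the original):
perl -e "$| = 1; for (;;) { print 'Printing line ', $i++, \"\n\"; sleep(1); }" | tee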

Perl: Testing an input reader?

Is there a way to automatically test, using the standard Test:: modules etc., whether a Perl program reads input from e.g. STDIN properly? E.g. testing a program that reads two integers from STDIN and prints their sum.
It's not 100% clear what you mean; I'll answer assuming you want to write a test script that tests your main program, and that as part of the test it needs to have test input data passed via STDIN.
You can easily do that if your program outputs what it reads. You don't need a special test module; simply:
Call the program you're testing via a system call,
redirect both STDIN and STDOUT of the tested program to your test script, using the
IPC::Open2 module to open both sides via pipes to filehandles (a sketch follows below),
... OR build your command line to redirect to/from files and read/write those files in the test script,
then check the STDOUT you collected from the tested program to make sure the correct values are printed.
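A minimal sketch of the IPC::Open2 route, assuming the program under test is a hypothetical adder.pl that reads two integers from STDIN and prints their sum (the names here are just placeholders for illustration):
use strict;
use warnings;
use Test::More tests => 1;
use IPC::Open2;

# Start the tested program with its STDIN and STDOUT connected to this script.
my $pid = open2(my $child_out, my $child_in, $^X, 'adder.pl');

print {$child_in} "2 3\n";       # feed the test input into the child's STDIN
close $child_in;                 # signal end of input

chomp(my $sum = <$child_out>);   # collect the child's STDOUT
waitpid $pid, 0;

is($sum, 5, 'adder.pl prints the sum of the two integers read from STDIN');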
If you want to test if STDIN is connected to a terminal, use -t, as in:
if( -t STDIN ){
print "Input from a terminal\n";
}else{
print "Input from a file or pipe\n";
}
See http://perldoc.perl.org/perlfunc.html#Alphabetical-Listing-of-Perl-Functions

Reopen STDERR/STDOUT to write to combined logfile with timestamps

I basically want to reopen STDERR/STDOUT so that they write to one logfile with both the stream name and a timestamp included on every line. So print STDERR "Hello World" prints STDERR: 20130215123456: Hello World. I don't want to rewrite all my print statements into function calls; also, some of the output will come from external processes via system() calls anyway, which I won't be able to rewrite.
I also need for the output to be placed in the file "live", i.e. not only written when the process completes.
(p.s. I'm not asking particularly for details of how to generate timestamps, just how to redirect to a file and prepend a string)
I've worked out the following code, but it's messy:
my $mode = ">>";
my $file = "outerr.txt";
open(STDOUT, "|-", qq(perl -e 'open(FILE, "$mode", "$file"); while (<>) { print FILE "STDOUT: \$\_"; }'));
open(STDERR, "|-", qq(perl -e 'open(FILE, "$mode", "$file"); while (<>) { print FILE "STDERR: \$\_"; }'));
(The above doesn't add dates, but that should be trivial to add)
I'm looking for a cleaner solution, one that doesn't require quoting perl code and passing it on the command line, or at least module that hides some of the complexity. Looking at the code for Capture::Tiny it doesn't look like it can handle writing a part of output, though I'm not sure about that. annotate-output only works on an external command sadly, I need this to work on both external commands and ordinary perl printing.
A child launched via system() has no access to the variables in your program, so it cannot write to your Perl STDOUT variable. That means any approach that runs Perl code when a Perl file handle is written to (e.g. tie) won't work for the child's output.
Write another script that runs your script with STDOUT and STDERR replaced with pipes. Read from those pipes and print out the modified output. I suggest using IPC::Run to do this, because it'll save you from using select. You can get away without it if you combine STDOUT and STDERR in one stream.
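A rough sketch of that wrapper idea using IPC::Run, assuming the real program is a placeholder myscript.pl and the combined log goes to outerr.txt; note that IPC::Run may hand the callbacks partial lines, so the line splitting here is simplified:
use strict;
use warnings;
use IPC::Run qw(run);
use POSIX qw(strftime);
use IO::Handle;

open my $log, '>>', 'outerr.txt' or die "Can't open outerr.txt: $!";
$log->autoflush(1);              # write to the logfile "live", not only at exit

sub log_chunk {
    my ($tag, $chunk) = @_;
    my $stamp = strftime('%Y%m%d%H%M%S', localtime);
    print {$log} "$tag: $stamp: $_\n" for split /\n/, $chunk;
}

# Run the real script with its STDOUT and STDERR replaced by pipes;
# each chunk of output is tagged and timestamped as it arrives.
run ['perl', 'myscript.pl'],
    '>',  sub { log_chunk('STDOUT', $_[0]) },
    '2>', sub { log_chunk('STDERR', $_[0]) };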

Is there a way to output debug messages in Perl that are not piped?

Is there a way to output debug messages in Perl that are not piped? I have a Perl script that I use in a pipe but I really want to print some diagnostic information to the screen instead of to the pipe.
Are you piping both stdout and stderr? If not, write to the one you're not piping :)
e.g.
print STDERR "This goes to standard error";
print STDOUT "This goes to standard output";
(If you don't provide a handle, STDOUT is the default of course - unless you've asked Perl to use a different default handle...)
Unless you have said something like 2>&1 on the commandline, STDERR should show up on the screen. You can write to STDERR like Jon Skeet suggests or you can use the warn function.
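For example, the following line (the message text is just a placeholder) stays on the screen while the script's normal output goes down the pipe:
warn "Debug: about to read the next record\n";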