How to force a subprocess to flush its stdout buffer? - subprocess

Platform: Windows 8.1
IDE: VS2013
Language: C/C++
Process A reads the stdout of a subprocess through a pipe redirect, but the subprocess doesn't call fflush after printf, so process A can't read anything from the pipe until the subprocess runs to completion.
PS: I have the source code of the subprocess, but it is very hard to modify.
So can process A force the subprocess to flush its stdout buffer, so that process A can read something before the subprocess finishes (the same effect as fflush)?
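For what it's worth, the buffer in question lives inside the subprocess's own C runtime, so process A generally cannot flush it from the outside; the realistic options are a small change in the child (fflush, or setvbuf(stdout, NULL, _IONBF, 0) at startup) or making stdout look like a console to the child. Below is a minimal Python sketch of the effect, with hypothetical file names, showing that the flush has to happen on the writer's side:

# reader.py -- plays the role of process A, reading the child's stdout pipe
import subprocess
import sys

p = subprocess.Popen([sys.executable, "writer.py"],
                     stdout=subprocess.PIPE, universal_newlines=True)
for line in p.stdout:            # blocks until the writer flushes or exits
    print("got:", line.rstrip())

# writer.py -- plays the role of the subprocess
import sys
import time

for i in range(5):
    sys.stdout.write("tick %d\n" % i)
    sys.stdout.flush()           # the equivalent of fflush(stdout) in C;
                                 # without this, the reader sees nothing
                                 # until the writer exits
    time.sleep(1)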

Related

Python Windows how to get STDOUT data in real time?

I have a windows executable that I want to run over and over. The problem is that sometimes there's an error about 1 second in, but the program doesn't exit. So what I would like to do is to be able to grab the contents of stdout, recognize there is an error, and then kill the subprocess and start it over.
When I run this executable directly, its output prints to the screen just fine. But when I wrap it in a subprocess from Python, the stdout output doesn't show up until the program terminates.
I've tried basically everything posted here with no luck:
Constantly print Subprocess output while process is running
Here's my current code, I replaced the executable with a second python program just to remove any other weird variables:
parent_program.py:
import subprocess, os, sys
program = "python "+os.path.dirname(os.path.abspath(__file__)) + "/child_program.py"
with subprocess.Popen(program, shell=True, stdout=subprocess.PIPE, bufsize=1, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')
child_program.py:
from time import sleep
for i in range(0, 10):
    print(i)
    sleep(1)
What I would expect is that I would see 0, 1, 2, 3... printed one second apart, as if I had just run python child_program.py, but instead I get nothing for 10 seconds and then all the output at once.
I also thought about trying to run the program from the CMD prompt and piping the stdout to a file python child_program.py 2>&1 > output.txt and then having python read that file, but it's the same problem, the file doesn't get written until the program terminates.
Is there any way to fix this on windows?
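In this reproduction the buffering is in child_program.py itself: Python, like the C runtime, block-buffers stdout when it is connected to a pipe instead of a console. A minimal sketch of the usual workaround, assuming the child really is a Python script you launch yourself, is to pass -u (or set PYTHONUNBUFFERED=1) so the child writes unbuffered:

import os
import subprocess
import sys

script = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                      "child_program.py")

# "-u" makes the child Python run unbuffered, so each line travels
# through the pipe as it is printed instead of all at once at exit.
with subprocess.Popen([sys.executable, "-u", script],
                      stdout=subprocess.PIPE, bufsize=1,
                      universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')

For a non-Python executable you usually cannot inject an equivalent flag, and running it under a pseudo-console (for example winpty on Windows) is the common way to trick its runtime into line-buffering.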

Can I archive output from ipython's %run (stdout,stderr) and maintain ability to debug interactively?

I use ipython --pdb -c '%run script.py' from a bash shell to launch script.py in a manner that is very convenient for debugging (the --pdb causes ipython to drop into the debugger automatically on an exception). I'd like the stdout and stderr from script.py to be saved to files in addition to being displayed on the screen (like using tee and shell redirections from a bash shell).
I have two requirements that are difficult to satisfy simultaneously:
1.) Display and capture stdout and stderr in a manner that is transparent to script.py.
2.) Preserve the ability to debug after an exception.
I have not found a way to accomplish (1) that does not break the interactive debugger. I believe this is because ipython (or ipdb) uses stdout to interact with the user during debugging. If I were willing to modify script.py this would not be too difficult (e.g. custom logging logic) but I'm hoping to use this very generically like shell redirection.
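For requirement (1) taken on its own, an in-process tee is not hard. Here is a minimal sketch (with a hypothetical log path) of the kind of custom logic the question hopes to avoid adding to script.py, which also illustrates the conflict, since ipdb reads and writes through these same stream objects:

import sys

class Tee(object):
    # Write-through wrapper that copies a stream's output to a log file.
    def __init__(self, stream, logfile):
        self.stream = stream
        self.logfile = logfile

    def write(self, data):
        self.stream.write(data)
        self.logfile.write(data)

    def flush(self):
        self.stream.flush()
        self.logfile.flush()

log = open("script.log", "w")        # hypothetical log path
sys.stdout = Tee(sys.stdout, log)    # the debugger talks to the user
sys.stderr = Tee(sys.stderr, log)    # through these objects too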

System call to run pbmtextps: ghostscript

I'm coding a Perl script to generate images with text in them. I'm on a Linux machine. I'm using pbmtextps. When I try to run pbmtextps in Perl with a system call like this
system("pbmtextps -fontsize 24 SampleText > out.pbm");
I get this error message
pbmtextps: failed to run Ghostscript process: rc=-1
However, if I run the exact same pbmtextps command from the command-line outside of Perl, it runs with no errors.
Why does it cause the Ghostscript error when I run it from inside a Perl script?
ADDITIONAL INFO: I tried to hack around this by writing a C program called mypbmtextps.c which does the exact same thing with a C system call. That works from the command line with no errors. But when I call that C program from the Perl script, I get the same Ghostscript error.
ANSWER: I solved it. The problem was this line in the Perl script:
$SIG{CHLD} = 'IGNORE';
When I got rid of that (which I need for other things, but not in this script) it worked okay. If anyone knows why that would cause a problem, please add that explanation.
Ah-ha. Well, SIGCHLD is required for wait(), and so is required for Perl to retrieve the exit status of the child process created by system(). In particular, system() always returns -1 when SIGCHLD is ignored, and $? is likewise unavailable while SIGCHLD is ignored.
What printed the error message? pbmtextps, or your perl script?
As far as I know, the signal handler for your perl process shouldn't affect the signal handler for the child processes, but this could depend on your version of perl and your OS version. On my Linux Mint 13 with Perl 5.14.2 the inner perl script prints 0, with the outer script printing -1:
perl -e '$SIG{CHLD}= "IGNORE"; print system(q{perl -e "print system(q(/bin/sleep 1))"})'
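The same behaviour is easy to reproduce outside Perl. A sketch of the equivalent experiment in Python, assuming a Linux system, where os.system calls the same C system() under the hood:

import os
import signal

# With SIGCHLD set to SIG_IGN the kernel reaps children automatically,
# so the wait() inside system() fails with ECHILD and the exit status
# is lost: system() reports -1 even though the command ran fine.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)
print(os.system("/bin/sleep 1"))     # prints -1

signal.signal(signal.SIGCHLD, signal.SIG_DFL)
print(os.system("/bin/sleep 1"))     # prints 0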
Is your perl script modifying the environment?
You can test with
system("env > /tmp/env.perl");
and then compare it to the environment from your shell:
env > /tmp/env.shell
diff /tmp/env.shell /tmp/env.perl
Is the perl script also being run from the shell, or is it being run from some other process like cron or apache? (in particular, you should check $PATH)

Can stdout and stderr from a child process be correctly interleaved? (Python 2.7, Windows)

I've seen several posts on this topic but haven't made much progress. I would like to run a command line app from a Python script and receive back:
1.) The stdout only
2.) The stderr only
3.) Stdout and stderr together
4.) The exit code from the app
I'm starting to wonder if this is even possible in Python, especially on Windows. When you have a child process that quickly interleaves stdout and stderr, it seems really hard to preserve the ordering.
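A common compromise, sketched below with a placeholder command, is to accept that all four outputs cannot come from a single run: capture the streams separately when you need (1), (2), and (4), and merge stderr into stdout when you need the interleaving of (3), because once the two streams go down different pipes the OS no longer records their relative order.

import subprocess

cmd = ["myapp.exe", "--arg"]         # placeholder command

# (1), (2) and (4): separate capture; ordering between the two
# streams is lost because each pipe fills independently.
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
code = p.returncode

# (3): merged capture; writes land in one pipe in the order the
# child made them, at the cost of losing the stdout/stderr split.
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
merged, _ = p.communicate()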

Handle signal from 'kill -3' or taskkill in Win32Console application

I have a Win32 console application (built from Visual Studio as a Win32 console project) which does some log file (.txt) processing. I have a separate Perl program (a legacy program) which now needs to start this Win32 console application and then stop it when done.
The Perl program starts an instance of the Win32 console app using the Win32::Process API. It can kill the console app when done with either "kill -x pid" or Win32::Process::Kill. The problem is that the console app needs to know when it is being killed/terminated so that it can flush its log handling. The console app has already registered a handler via the SetConsoleCtrlHandler API, but the handler doesn't get called when the app is killed from the Perl program by, say, kill -2/3 pid.
What do I change in the Perl program or in the Win32 console app so that it knows when it is being terminated?
Thanks!
Signal handling in Windows is a little quirky if you're used to Unix. I have done a lot of investigation into this, and wrote up my findings here (starting at line 261).
Short answer: Windows processes can set $SIG{INT}, $SIG{QUIT}, or $SIG{BREAK}. All other signal handlers are ignored. Signal them from your separate app with the builtin kill:
kill 'INT', $the_win32_logger_pid;
kill 'QUIT', $the_win32_logger_pid;
kill 'BREAK', $the_win32_logger_pid;
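For completeness, the same events can be raised from Python on Windows. A sketch, assuming the console app (hypothetical logger.exe) is started in its own process group, which is what lets it be signalled individually:

import os
import signal
import subprocess

# CREATE_NEW_PROCESS_GROUP makes the child the root of a new group,
# so CTRL_BREAK_EVENT can be delivered to it alone.
p = subprocess.Popen(["logger.exe"],
                     creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

# Arrives in the child's SetConsoleCtrlHandler as CTRL_BREAK_EVENT,
# giving it a chance to flush its logs before exiting.
os.kill(p.pid, signal.CTRL_BREAK_EVENT)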