Why is stdout.readline() for a subprocess working in foreground but not in the background? Is there an alternative?

I have a script that I want to run in the background with "&".
In this script I'm calling a subprocess that reads the output of airodump-ng:
cmd = "sudo airodump-ng wlan1 -c 1"
airodump = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=None, stdin=None, universal_newlines=True, shell=True, bufsize=1)
This works, but once I try to read the output with:
line = airodump.stdout.readline()
it just doesn't continue.
There is no error message, so I'm assuming it's just blocking the program from continuing.
When I start the script in the foreground, without "&", everything works as it should.
I don't understand how putting the program in the background can cause this problem.
Everything else in the script works as it should in the background, the only thing that doesn't work is the stdout.readline().
Does anyone have any ideas? I've been searching the web for weeks and have tried a million things, and I can't find a reason why this is happening, or any alternative.
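For what it's worth, here is a sketch of one thing to try; it rests on an assumption, not a confirmed diagnosis. A backgrounded job that tries to read from its controlling terminal is stopped by SIGTTIN, and airodump-ng may try to read keyboard input, so detaching the child's stdin from the terminal (and passing an argument list instead of shell=True) might let readline() keep working:

import subprocess

# Same command as above; stdin is detached so the backgrounded job
# cannot be stopped by SIGTTIN if airodump-ng tries to read the keyboard.
cmd = ["sudo", "airodump-ng", "wlan1", "-c", "1"]
airodump = subprocess.Popen(
    cmd,                          # argument list instead of shell=True
    stdout=subprocess.PIPE,
    stderr=None,                  # inherited, as in the question
    stdin=subprocess.DEVNULL,     # do not inherit the terminal's stdin
    universal_newlines=True,
    bufsize=1,
)

line = airodump.stdout.readline()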

Related

Python Windows how to get STDOUT data in real time?

I have a Windows executable that I want to run over and over. The problem is that sometimes there's an error about 1 second in, but the program doesn't exit. So what I would like to do is grab the contents of stdout, recognize there is an error, and then kill the subprocess and start it over.
When I run this executable directly, output prints to the screen just fine. But when I wrap it in a subprocess from Python, nothing shows up on stdout until the program terminates.
I've tried basically everything posted here with no luck:
Constantly print Subprocess output while process is running
Here's my current code, I replaced the executable with a second python program just to remove any other weird variables:
parent_program.py:
import subprocess, os, sys

program = "python " + os.path.dirname(os.path.abspath(__file__)) + "/child_program.py"
with subprocess.Popen(program, shell=True, stdout=subprocess.PIPE, bufsize=1, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')
child_program.py:
from time import sleep

for i in range(0, 10):
    print(i)
    sleep(1)
What I would expect is to see 0, 1, 2, 3, ... printed one number per second, as if I had just run python child_program.py, but instead I get nothing for 10 seconds and then all the output at once.
I also thought about running the program from the CMD prompt and redirecting the stdout to a file, python child_program.py 2>&1 > output.txt, and then having Python read that file, but it's the same problem: the file doesn't get written until the program terminates.
Is there any way to fix this on windows?
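One common workaround, sketched here under the assumption that the delay comes from the child buffering its stdout when it is not attached to a console: run the child interpreter with the -u flag so its output is unbuffered, and pass an argument list instead of shell=True.

# parent_program.py -- a sketch, assuming the child is the one buffering.
# "python -u" disables the child interpreter's stdout buffering, so each
# print() reaches the pipe immediately instead of when the program exits.
import os
import subprocess
import sys

child = os.path.join(os.path.dirname(os.path.abspath(__file__)), "child_program.py")

with subprocess.Popen(
    [sys.executable, "-u", child],   # -u: unbuffered child stdout
    stdout=subprocess.PIPE,
    bufsize=1,
    universal_newlines=True,
) as p:
    for line in p.stdout:
        print(line, end="", flush=True)

For an arbitrary executable that cannot be told to flush, the buffering happens inside the child's C runtime, and the usual escape hatch is a pseudo-terminal (Python's pty module on POSIX, or a pty layer such as winpty on Windows).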

A process that keeps coming back even if I forcefully shut it down

Hello everyone.
I'm trying to kill a program called test.exe with the following command:
taskkill /f /im "test.exe"
Then I run the following command:
tasklist
but only the PID changes and the program is running again.
Is there a way to force this to end?
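One plausible explanation, offered here as an assumption rather than something the question confirms, is that a parent or watchdog process restarts test.exe each time it is killed. A minimal sketch for finding and stopping that parent, using the third-party psutil package (pip install psutil):

import psutil

# Assumption: something respawns test.exe, so stop its parent first.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "test.exe":
        parent = proc.parent()
        if parent is not None:
            print(f"test.exe pid={proc.pid}, parent={parent.name()} pid={parent.pid}")
            parent.terminate()   # stop whatever keeps restarting test.exe
        proc.terminate()         # then stop test.exe itself

If the parent turns out to be a Windows service, stopping it through the service manager (sc stop <name>) is the cleaner route.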

Forgot to use & after command, need to send process to background

I have been running a program using nohup, but I forgot to add & after the command, so the terminal is stuck on a process that has been running for hours. The Python script I am running generates 5 processes each time.
Is there any way I can make the entire script continue in the background (getting the same effect as an &) without killing and rerunning the process?
Hit Ctrl-Z to suspend the process.
Then run bg to let it resume as a background job.

Odd behavior with Perl system() command

Note that I'm aware this is probably not the best way to do this, but I've run into it somewhere before and I'm curious as to the answer.
I have a Perl script that is called from an init script; it runs and occasionally dies. To quickly debug this, I put together a quick wrapper Perl script that basically consists of:
# $path is set from a library call.
while (1) {
    system("$path/command.pl " . join(" ", @ARGV) . " >>/var/log/outlog 2>&1");
    sleep 30;  # Added this one later. See below...
}
Fire this up from the command line and it runs fine and as expected: command.pl is called, and the script basically halts there until the child process dies, then goes around again.
However, when called from a start script (actually via start-stop-daemon), the system command returns immediately, leaving command.pl running. Then it goes around for another go. And again and again. (This was not fun without the sleep command.) ps reveals the parent of the (many) command.pl processes to be 1, rather than the PID of the wrapper script (which it is when I run from the command line).
Anyone know what's occurring?
Maybe command.pl is not being run successfully. Maybe the file doesn't have execute permission (do you need to say perl command.pl?). Maybe you are running the command from a different directory than you thought, and the command.pl file isn't found.
There are at least three things you can check:
the standard error output of your command. For now you are burying it in the log by saying 2>&1. Remove that part and observe what errors the system command produces.
the return value of system. The command may run and still exit with a nonzero code, but if system returns 0, you know the command was successful.
Perl's error variable $!. If there was a problem, Perl will set $!, which may or may not be helpful.
To summarize, try:
my $ec = system("command.pl >> /var/log/outlog");
if ($ec != 0) {
    warn "exit code was $ec, \$! is $!";
}
Update: if multiple instances of the command keep showing up in your ps output, then it sounds like the program is forking and running itself in the background. If that is indeed what the command is supposed to do, then what you do NOT want to do is run this command in an endless loop.
Perhaps when run from a daemon, the system command is using a different shell than the one used when you run it as yourself. Maybe the shell used by the daemon does not recognize the >& construct.
Instead of system("..."), try the exec("...") function and see if that works for you.

python subprocess communicate hangs calling shell script

Using Python 3.2 and the following code snippet:
import subprocess

p = subprocess.Popen(['../start_server.sh'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if out is not None:
    out = out.decode('utf-8')
if err is not None:
    err = err.decode('utf-8')
print('out ', out)
print('err ', err)
On some shell scripts it works just fine and I get my output. On others it just hangs, yet in every case the shell script runs from the command line with no errors. The only commonality I can see is that the ones that hang (usually) produce zero output. When it fails, I check the running processes: my shell script is not listed, but the Python script is still running.
What's a reliable way to call a shell script and always return control to my Python program?
Edit:
Using pipes, Popen and such is not a requirement; the only requirement is that control is returned to my Python script when the shell script exits. If the shell script never returns to the command prompt, then my Python script will also never return.
So, assuming the shell script(s) I am calling always return to the command prompt, how can I get control back to my Python program?
If there's a better way than what I've listed above, please enlighten me.
One additional bit I've found: the shell scripts that "hang" seem to end with a call to nohup. Yet they return to the command prompt with no issues.
What's a reliable way to call a shell script and always return control to my Python program?
If you are using pipes, this will depend on your scripts; a more general answer is essentially the halting problem and even the mighty StackOverflow can't help you with that.
I would encourage you to dig deeper and try to create a reproducible case so that we can help you solve the particular problem you're seeing.
Edit
If you don't need pipes, then just omit the stdout and stderr parameters (or set them to something other than PIPE). See python subprocess management.
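A minimal sketch of that suggestion: communicate() waits for EOF on the pipes, and a child started with nohup inherits those pipes and keeps them open, so communicate() never returns; not creating pipes at all avoids the hang.

import subprocess

# No pipes are created, so a nohup'ed grandchild holding inherited file
# descriptors can no longer block us; wait() returns as soon as
# start_server.sh itself exits.
# Note: subprocess.DEVNULL needs Python 3.3+; on 3.2, pass an open
# os.devnull file object instead.
p = subprocess.Popen(['../start_server.sh'],
                     stdout=subprocess.DEVNULL,
                     stderr=subprocess.DEVNULL)
p.wait()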