I am running the code snippet below on Windows. The server keeps listening continuously after reading from the client, and I want to terminate the command after a set time period.
If I use the alarm() function inside main.pl, it terminates the whole Perl program, so I placed the system command in a separate Perl file (alarm.pl)
and called that file from the original script using system().
But this way I was unable to capture the output of the system() call in either the original Perl file or the called one.
Could anybody please tell me how to terminate a system() call after a timeout, or how to capture its output with the approach above?
main.pl
my @output = system("alarm.pl");
print"one iperf completed\n";
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
alarm.pl
alarm 30;
my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
Either way, display.txt is always empty.
There are a few separate issues here.
First, to keep the alarm from killing your script, you need to handle the ALRM signal. See the alarm documentation. You shouldn't need two scripts for this.
Second, system doesn't capture output. You need one of the backtick variants or a pipe if you want to do that; there are existing answers on Stack Overflow covering this.
Third, if alarm.pl puts anything in display.txt, you discard it in main.pl when you re-open the file in write mode. You only need to create the file in one place. When you get rid of the extra script, you won't have this problem.
I recently had some problems with alarm and system, but switching to IPC::System::Simple fixed that.
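For reference, a minimal sketch of what that looks like (capture is IPC::System::Simple's backtick replacement; the adb command is the one from the question):

use IPC::System::Simple qw(capture);

# behaves like backticks, but dies with a descriptive error if the command
# fails, which also combines cleanly with the alarm/eval pattern shown below
my @output = capture("adb shell cd /data/app; ./iperf -u -s -p 5001");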
Good luck, :)
What the hell was I thinking? You don't need a background process for this task. You just need to follow the example in perldoc -f alarm and wrap your time-sensitive code in an eval block.
my $command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
my @output;
eval {
    local $SIG{ALRM} = sub { die "Timeout\n" };
    alarm 30;
    @output = `$command`;
    alarm 0;
};
if ($@) {
    warn "$command timed out.\n";
} else {
    print "$command successful. Output was:\n", @output;
}
Inside the eval block, you can capture your output the regular way (with backticks or qx() or readpipe). Though if the call times out, there won't be any output.
If you don't need the output (or don't mind hacking some interprocess communication together), an almost idiot-proof alternative is to set the alarm and run the system call in a child process.
$command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
if (($pid = fork()) == 0) {
    # child process
    $SIG{ALRM} = sub { die "Timeout\n" };  # handling SIGALRM in child is optional
    alarm 30;
    my $c = system($command);
    alarm 0;
    exit $c >> 8;  # if you want to capture the exit status
}
# parent
waitpid $pid, 0;
waitpid will return when either the child's system command is finished, or when the child's alarm goes off and kills the child. $? will hold the exit code of the system call, or something else (142 on my system) for an unhandled SIGALRM or 255 if your SIGALRM handler calls die.
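As an illustration (a sketch, not part of the original answer), the parent can decode $? along these lines:

waitpid $pid, 0;
if ($? & 127) {
    # child was killed by a signal, e.g. an unhandled SIGALRM (signal 14)
    warn "child killed by signal ", ($? & 127), "\n";
} else {
    # high byte is the child's exit status: system's status, or 255 after die
    print "child exited with status ", ($? >> 8), "\n";
}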
I ran into a similar problem that required me to:
run a system command and get its output
time out the system command after x seconds
kill the system command process and all child processes
After much reading about Perl IPC and manual fork & exec, I came up with this solution. It is implemented as a simulated 'backtick' subroutine.
use Error qw(:try);

$SIG{ALRM} = sub {
    my $sig_name = shift;
    die "Timeout by signal [$sig_name]\n";
};

# example
my $command = "vmstat 1 1000000";
my $output  = backtick(
    command => $command,
    timeout => 60,
    verbose => 0
);

sub backtick {
    my %arg = (
        command => undef,
        timeout => 900,
        verbose => 1,
        @_,
    );
    my @output;

    defined( my $pid = open( KID, "-|" ) )
        or die "Can't fork: $!\n";

    if ($pid) {
        # parent
        # print "parent: child pid [$pid]\n" if $arg{verbose};
        try {
            alarm( $arg{timeout} );
            while (<KID>) {
                chomp;
                push @output, $_;
            }
            alarm(0);
        }
        catch Error with {
            my $err = shift;
            print $err->{-text} . "\n";
            print "Killing child process [$pid] ...\n" if $arg{verbose};
            kill -9, $pid;
            print "Killed\n" if $arg{verbose};
            alarm(0);
        }
        finally {};
    }
    else {
        # child
        # set the child process to be a group leader, so that
        # kill -9 will kill it and all its descendants
        setpgrp( 0, 0 );
        # print "child: pid [$pid]\n" if $arg{verbose};
        exec $arg{command};
        exit;
    }

    wantarray ? @output : join( "\n", @output );
}
You might use "timeout <seconds>" to wrap your commands if it's already common on your system.
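For example (a sketch; assumes GNU coreutils timeout(1) is available):

# GNU timeout kills the command after 30 seconds and exits with status 124 on a timeout
my @output = `timeout 30 vmstat 1 1000000`;
warn "command timed out\n" if ($? >> 8) == 124;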
I am attempting to launch a subprocess in perl that either:
Successfully runs to conclusion
Takes too long and needs to be killed
Prints an error message to STDERR and hangs and needs to be killed
I could handle just the timeout issue (2) with waitpid.
I could look for messages on STDOUT with a while (<CHILD_HANDLE>) loop.
I could read the STDOUT and STDERR by appending 2>&1 to the process string.
But this got more complicated when I realized that after sending an error to STDERR (3), the child process does not emit a line break and then hangs for a while. I could wait for the timeout as in (2), but I'd prefer to identify the error right away, so I switched from the while (<CHILD_HANDLE>) syntax to reading single characters with getc. This worked with a toy test process, but when I try it with the real command I'm monitoring, the child process terminates after sending a single character ("\n"). This may be difficult to troubleshoot without my sharing details of the actual subprocess, but I'm hoping some guidance can be provided.
Here is the simplified code I've got:
#!/usr/bin/perl
use POSIX qw(:sys_wait_h WNOHANG);
my $TIMEOUT = 60 * 30; # 30 minutes
my $alarm_hit = 0;
my $pid = open(CHILD_HANDLE, "-|", 'command arg1 arg2 arg3 2>&1');
die "Could not fork\n" if not defined $pid;
if ($pid > 0) {
    # set the timeout alarm
    local $SIG{ALRM} = sub { kill 9, $pid; print "Killing process after timeout\n"; $alarm_hit = 1; };
    alarm $TIMEOUT;
    my $next_letter;
    my $buffer;
    # read the next character from the child
    READ_LOOP: while ($next_letter = getc(CHILD_HANDLE)) {
        $buffer .= $next_letter;
        # if we see the error message
        if ($buffer =~ /Error Message/) {
            print "======= HIT ERROR MESSAGE\n";
            # kill the child
            close CHILD_HANDLE;
            kill 'KILL', $pid;
            last READ_LOOP;
        # if the timeout alarm was hit
        } elsif ($alarm_hit == 1) {
            close CHILD_HANDLE;
            print "======= ALARM HIT\n";
            last READ_LOOP;
        # else print the child output when we hit a new line
        } elsif ($next_letter eq "\n") {
            print "> $buffer";
            $buffer = '';
        }
    }
}
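One avenue worth exploring (a sketch, not from the original post; it reuses $pid and $alarm_hit from the code above): instead of getc, use IO::Select plus sysread, which hands back whatever bytes are available without waiting for a newline, so a partial error line can be spotted right away:

use IO::Select;

my $sel = IO::Select->new(\*CHILD_HANDLE);
my $buffer = '';
READ_LOOP: while (!$alarm_hit) {
    next READ_LOOP unless $sel->can_read(1);        # wait up to 1 second for data
    my $n = sysread(CHILD_HANDLE, my $chunk, 4096); # read whatever is available
    last READ_LOOP unless $n;                       # EOF (or read error)
    $buffer .= $chunk;
    if ($buffer =~ /Error Message/) {
        kill 'KILL', $pid;
        last READ_LOOP;
    }
}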
I have a Perl script which runs an external executable. That executable runs for a while (sometimes seconds, sometimes an hour) and can write text to both STDOUT and STDERR, as well as return an exit code; all of these are needed. The following code demonstrates first a successful run of the external executable (a small bash script with one line, shown in the comment), then a run with a bad exit status (using gs, Ghostscript, as the example).
I want the external executable to hand its STDOUT to the Perl script for evaluation, filtering, formatting, etc. before it gets logged to a logfile (which is also used for other things), while the external program is still executing. STDERR should ideally be handled the same way.
As it stands, the script logs everything from STDOUT, but only after the executable has finished, and STDERR is logged directly, without any evaluation. I have no way to install additional Perl modules.
How do I get my Perl script to get each line (STDOUT + STDERR) from the executable while it is spitting it out (not just at the end) as well as its exit code for other purposes?
#!/usr/bin/perl
@array_executable_and_parameters = "/home/username/perl/myexecutable.sh"; # ls -lh ; for i in {1..5}; do echo X; sleep 1; done
@array_executable_and_parameters2 = "gs aaa";
my $line;
chdir("/home/username/perl/");
$logFileName = "logfileforsomespecificinput.log";
open(LOGHANDLE, ">>$logFileName" );
open (STDERR, '>>', $logFileName); #Prints to logfile directly
#open (STDERR, '>>', <STDOUT>); #Prints to own STDOUT (screen or mailfile)
print LOGHANDLE "--------------OK run\n";
open CMD, '-|', @array_executable_and_parameters or die $@;
while (defined($line = <CMD>)) { #Logs all at once at end
print LOGHANDLE "-----\$line=$line-----\n";
}
close CMD;
$returnCode1= $?>>8;
print LOGHANDLE "\$returnCode1=$returnCode1\n";
print LOGHANDLE "--------------BAD run\n";
open CMD2, '-|', @array_executable_and_parameters2 or die $@;
while (defined($line = <CMD2>)) {
print LOGHANDLE "-----\$line=$line-----\n";
}
close CMD2;
$returnCode2= $?>>8;
print LOGHANDLE "\$returnCode2=$returnCode2\n";
close(LOGHANDLE);
Take 2. After good advice in the comments, I tried IPC::Run. But something still does not work as expected. I seem to be missing how the looping from start (or pump?) to finish works, and how to make it iterate when I don't know what the last output will be, which is what the examples everywhere assume. So far I have the following code, but it does not work line by line. It spits out the listing of files in one go, then waits until the external loop has fully finished before printing all the X's. How do I tame it to meet the original requirements?
#! /usr/bin/perl
use IPC::Run qw( start pump finish );
@array_executable_and_parameters = ();
push(@array_executable_and_parameters, "/home/username/perl/myexecutable.sh"); # ls -lh ; for i in {1..5}; do echo X; sleep 1; done
my $h = start \@array_executable_and_parameters, \undef, \$out, \$err;
pump $h; # while ($out or $err);
print "1A. \$out: $out\n";
print "1A. \$err: $err\n";
$out = "";
$err = "";
finish $h or die "Command returned:\n\$?=$?\n\$\@=$@\nKilled by=" . ( $? & 0x7F ) . "\nExit code=" . ( $? >> 8 ) . "\n";
print "1B. \$out: $out\n";
print "1B. \$err: $err\n";
Look at the IPC modules, especially IPC::Cmd and IPC::Run, and if those don't satisfy you, IPC::Run3. There are a lot of details you would otherwise have to cover yourself, and those modules will make your life a lot easier.
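For instance, a minimal IPC::Run3 sketch (note that IPC::Run3 buffers through temporary files, so the output only becomes available after the command exits; for true line-by-line processing, IPC::Run, as used below, is the better fit):

use IPC::Run3;

my ($out, $err);
run3 \@array_executable_and_parameters, \undef, \$out, \$err;
my $returnCode = $? >> 8;   # exit code, as in the original script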
OK, I have got it to work, so far. It might still have some issues: I'm not sure about environment variables (like umask or locale settings), about the system load while pump is waiting/blocking, or about how to replace die with code that captures all the status variables. Nevertheless, for my purpose it seems to work well. We will see how it behaves on a real system.
#! /usr/bin/perl
BEGIN {
    push @INC, '/home/myusername/perl5/lib/perl5'; # where the modules from CPAN are
}
use IPC::Run qw( start pump finish );
@array_executable_and_parameters = ();
push(@array_executable_and_parameters, "/home/myusername/perl/myexecutable.sh"); # ls -lh ; for i in {1..5}; do echo X; sleep 1; done
my $h = start \@array_executable_and_parameters, \undef, \$out, \$err;
while (42) {
    pump $h; # while ($out or $err);
    if ($out eq '' and $err eq '') { last; }
    print "1A. \$out: $out\n";
    print "1A. \$err: $err\n";
    $out = "";
    $err = "";
}
finish $h or die "Command returned:\n\$?=$?\n\$\@=$@\nKilled by=" . ( $? & 0x7F ) . "\nExit code=" . ( $? >> 8 ) . "\n";
print "1B. \$out: $out\n";
print "1B. \$err: $err\n";
The key was understanding how the blocking of pump works. All the manuals and help pages kind of skip over this part. So a never-ending while loop that breaks out when pump returns without producing further output was the key.
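One more detail (my addition, not from the original post): pump fills $out with whatever bytes have arrived, which may be a partial line or several lines at once, so for true line-by-line handling you can peel complete lines off $out and keep the remainder for the next pump:

while ($out =~ s/\A([^\n]*)\n//) {
    print "line: $1\n";   # $out keeps any trailing partial line
}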
My Perl script runs an external program (which takes a single command-line parameter) and processes its output. Originally, I was doing this:
my @result = `prog arg`;
However, turns out that the program is buggy and hangs unpredictably in rare cases. How can I kill the program if it hasn't exited after a certain amount of time? The script has to work both in Windows and in Linux, and it is my understanding that alarms and forks don't work well (or at all) in Windows.
I found a module called IPC::Run but I can't figure out how to use it properly from its documentation. :-( I tried this:
use strict;
use warnings;
use IPC::Run qw(run timeout);
my $in;
my $out;
my $err;
my @result;
my @cmd = qw(prog arg);
run \@cmd, \$in, \$out, \$err, timeout (10) or die "@cmd: $?";
push @result, $_ while (<$out>);
close $out;
print @result;
As a test, I created a program that just sleeps 60 seconds, prints a string to stdout and exits. When I try to run it with the above code, it hangs for 60 seconds (instead of for 10 seconds, as specified in the timeout) and aborts with a bizarre error:
IPC::Run: timeout on timer #1 at C:/Bin/Unix/Perl/site/lib/IPC/Run.pm line 2956
Then I found another module, Proc::Reliable. From the description, it seems to do precisely what I want. Except that it doesn't work! I tried this:
use strict;
use warnings;
use Proc::Reliable;
my $proc = Proc::Reliable->new ();
$proc->maxtime (10);
my $out = $proc->run ("prog arg");
print "$out\n";
It indeed aborts the child process after 10 seconds. So far, so good. But then I modified the external program and made it sleep for only 5 seconds. This means that the program should finish before the 10-second timeout specified in the above code and its stdout output should be captured into the variable $out. But it isn't! The above script doesn't output anything.
Any ideas how to do it properly? (Fixing the buggy external program is not an option.) Thanks in advance.
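For what it's worth, the IPC::Run attempt above is closer to working than it looks: timeout(10) signals a timeout by throwing an exception, not by making run return false, so it has to be caught with eval; and after run returns, $out is a plain scalar holding all the output, not a filehandle. A sketch:

use IPC::Run qw(run timeout);

my ($in, $out, $err);
my @cmd = qw(prog arg);
eval { run \@cmd, \$in, \$out, \$err, timeout(10) };
if ($@ =~ /timeout/) {
    die "@cmd timed out\n";
}
print $out;   # $out is a scalar containing the captured output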
Try the poor man's alarm
my $pid;
if ($^O eq 'MSWin32') {
    $pid = system 1, "prog arg"; # Win32 only, run proc in background
} else {
    $pid = fork();
    if (defined($pid) && $pid == 0) {
        exec("prog arg");
    }
}
my $poor_mans_alarm = "sleep 1,kill(0,$pid)||exit for 1..$TIMEOUT;kill -9,$pid";
system($^X, "-e", $poor_mans_alarm);
The poor man's alarm runs in a separate process. Every second, it checks whether the process with identifier $pid is still alive. If the process isn't alive, the alarm process exits. If the process is still alive after $TIMEOUT seconds, it sends a kill signal to the process (I used 9 to make it untrappable and -9 to take out the whole subprocess tree; your needs may vary. kill 9,... is also portable).
Edit: How do you capture the output of the process with the poor man's alarm?
Not with backticks -- then you can't get the process id and you may lose the intermediate output if the process times out and gets killed. The alternatives are
1) send output to a file, read the file when the process is done
$pid = system 1, "proc arg > some_file";
... start poor man's alarm, wait for program to finish ...
open my $fh, '<', 'some_file';
my #process_output = <$fh>;
...
2) use Perl's open to start the process
$pid = open my $proc, '-|', 'prog arg';
if (fork() == 0) {
    # run poor man's alarm in a background process
    exec($^X, '-e', "sleep 1,kill 0,$pid||exit ...");
}
my @process_output = ();
while (<$proc>) {
    push @process_output, $_;
}
The while loop will end when the process ends, either naturally or unnaturally.
This is the best I could do. Any ideas on how to avoid the use of a temporary file on Windows would be appreciated.
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp;
use Win32::Process qw(STILL_ACTIVE NORMAL_PRIORITY_CLASS);
my $pid;
my $timeout = 10;
my $prog = "prog arg";
my @output;
if ($^O eq "MSWin32")
{
my $exitcode;
my $fh = File::Temp->new ();
my $output_file = $fh->filename;
close ($fh);
open (OLDOUT, ">&STDOUT");
open (STDOUT, ">$output_file" ) || die ("Unable to redirect STDOUT to $output_file.\n");
Win32::Process::Create ($pid, $^X, $prog, 1, NORMAL_PRIORITY_CLASS, '.') or die Win32::FormatMessage (Win32::GetLastError ());
for (1 .. $timeout)
{
$pid->GetExitCode ($exitcode);
last if ($exitcode != STILL_ACTIVE);
sleep 1;
}
$pid->GetExitCode ($exitcode);
$pid->Kill (0) or die "Cannot kill '$pid'" if ($exitcode == STILL_ACTIVE);
close (STDOUT);
open (STDOUT, ">&OLDOUT");
close (OLDOUT);
open (FILE, "<$output_file");
push @output, $_ while (<FILE>);
close (FILE);
}
else
{
$pid = open my $proc, "-|", $prog;
exec ($^X, "-e", "sleep 1, kill (0, $pid) || exit for 1..$timeout; kill -9, $pid") unless (fork ());
push @output, $_ while (<$proc>);
close ($proc);
}
print "Output:\n";
print @output;
You may want to use the alarm system call, as described in perldoc -f alarm.
I have two pieces of code:
1.
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
system("tv_grab_au | tv_sort >> $file");
system("cp $file $HOME/.xmltv/listings.xml");
unlink($file);
2.
while (-e $file) {
sleep 2;
system("tvtime-command DISPLAY_MESSAGE \'Updating TV Guide. Please wait this might take a several minutes...\'");
}
I would like to combine these two pieces of code so that the tv_grab_au XMLTV grabber runs (updating the TV guide) while, simultaneously, a command is sent to tvtime every two seconds to display the message 'Updating TV Guide. Please wait this might take a several minutes...', for as long as $file exists.
I tried this:
use strict;
use warnings;
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
use POSIX qw(:sys_wait_h);
$|++;
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) {
    system("tv_grab_huro | tv_sort >> $file");
    unlink($file);
}
else {
    while (! waitpid($pid, WNOHANG)) {
        system("tvtime-command DISPLAY_MESSAGE \'Updating TV Guide. Please wait this might take a several minutes...\'");
        sleep 2;
    }
}
Thanks.
The builtin fork function creates a copy of your current program in a new background process. The original process and the "child" process will then run at the same time. So you can do something like:
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
my $new_pid = fork();
die "fork failed $!" unless defined $new_pid; # this is uncommon
# Task 1 - in the background
if ($new_pid == 0) {
    system("tv_grab_au | tv_sort >> $file");
    system("cp $file $HOME/.xmltv/listings.xml");
    unlink($file);
    exit;   # don't forget this part!
}
# Task 2 - in the foreground
while (-e $file) {
    print "...";
    sleep 2;
}
Using $file as an indicator of when the first task has finished has some drawbacks. What if the child code has some runtime error? What if the child process gets interrupted? The child process could exit before it gets a chance to delete $file. Then your while loop in the parent process would never end.
The builtin waitpid command can check if a child process is still running, and can handle the case where the child terminates abnormally.
# Task 2
use POSIX ':sys_wait_h';
while (! waitpid $new_pid, &WNOHANG) {   # WNOHANG => non-blocking wait
    print "...";
    sleep 2;
}
Use fork(). I've added extra sleep() calls so you can see that the processes both run and work. In practice, the crontab update will probably run fast enough that the monitor loop doesn't run at all, or only runs once. I used "unless(...)" because it seems to me to make the code clearer; the thing to remember is that fork() returns the pid to the parent, and zero to the child. The process that doesn't see the pid is therefore a subprocess. (As has been pointed out, if the fork fails, the fork will return undef, and the code will be executing in the original process. In our case, that will simply mean that the monitoring starts up after the writing finishes, so the only thing we lose is the monitoring.)
my $file = "/tmp/.$$.crontab.txt";
my $crontab = <<EOS;
# Crontab lines here. Inserted at @{[scalar localtime()]}
EOS
my ($writer_pid, $monitor_pid);
$|++;
# Open file BEFORE launching processes. The monitor depends on the file's
# presence or absence, so if we opened it in the writer process, there'd be a
# chance the monitor process would check before we created it, and exit without
# monitoring.
die "Cannot open temp file\n" unless open(WRITE, ">" . $file);
# Crontab file handle gets passed to the forked process, so we can just use it.
# Altered so we can see the process do its thing.
unless ($writer_pid = fork()) {
    print WRITE $crontab . "\n";
    close WRITE;
    print("crontab -l |grep -v backup >> $file");
    sleep 20;
    print("crontab $file");
    sleep 10;
    unlink($file);
    print "done!\n";
    exit;
}
# Either file will exist, or the previous process will
# have completed. If it exists, we monitor. If not,
# we exit immediately.
unless ($monitor_pid = fork()) {
    # Child: monitor the writer.
    my $waitcount = 1;
    while ( -e $file ) {
        sleep 2;
        print "($waitcount) installing crontab...";
        $waitcount++;
    }
    print "installed\n";
    exit;
}
waitpid($monitor_pid, 0);
waitpid($writer_pid,0);
print "both processes done\n";
I have a Perl script which performs some tasks, one of which is to call a system command to "tar -cvf file.tar.....".
This can often take some time so I'd like the command line to echo back a progress indicator, something like a # echoing back to screen whilst the system call is in progress.
I've been doing some digging around and stumbled across fork. Is this the best way to go? Is it possible to fork off the system command, then create a while loop which checks on the status of the $pid returned by the fork?
I've also seen references to waitpid.... I'm guessing I need to use this also.
fork system("tar ... ")
while ( forked process is still active ) {
    print #
    sleep 1
}
Am I barking up the wrong tree?
Many thanks
John
Perl has a nice construction for this, called "pipe opens." You can read more about it by typing perldoc -f open at a shell prompt.
# Note the use of a list for passing the command. This avoids
# having to worry about shell quoting and related errors.
open(my $tar, '-|', 'tar', 'zxvf', 'test.tar.gz', '-C', 'wherever') or die ...;
Here's a snippet showing an example:
open(my $tar, '-|', 'tar', ...) or die "Could not run tar ... - $!";
while (<$tar>) {
    print ".";
}
print "\n";
close($tar);
Replace the print "." with something that prints a hash mark every 10 to 100 lines or so to get a nice gauge bar.
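For example (a sketch):

my $count = 0;
while (<$tar>) {
    print '#' if ++$count % 50 == 0;   # one mark per 50 lines; tune to taste
}
print "\n";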
An example that doesn't depend on the child process writing any kind of output, and just prints a dot about once a second as long as it's running:
use POSIX qw(:sys_wait_h);
$|++;
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) { # Child
    exec('long_running_command', @args)
        or die "Couldn't exec: $!";
} else {     # Parent
    while (! waitpid($pid, WNOHANG)) {
        print ".";
        sleep 1;
    }
    print "\n";
}
Although it could probably stand to have more error-checking, and there might actually be something better already on CPAN. Proc::Background seems promising for abstracting this kind of job away but I'm not sure how reliable it is.
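A sketch of how Proc::Background could be used here (the command and @args are placeholders, as above):

use Proc::Background;

my $proc = Proc::Background->new('long_running_command', @args);
while ($proc->alive) {   # poll the background process once a second
    print ".";
    sleep 1;
}
print "\n";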
$|++;
open(my $tar, 'tar ... |') or die "Could not run tar ... - $!";
while (my $file = <$tar>) {
    print "$file";
}
print "\n";
close($tar);
This prints the filenames received from tar.
For showing progress during a long-running task, you will find Term::ProgressBar useful -- it does the "printing of # across the screen" functionality that you describe.
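A sketch, assuming you can estimate the total number of lines tar will print ($expected_lines is a placeholder):

use Term::ProgressBar;

my $bar  = Term::ProgressBar->new({ name => 'Archiving', count => $expected_lines });
my $done = 0;
while (<$tar>) {
    $bar->update(++$done);   # redraws the bar as lines arrive
}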
I would try something like this
open my $tar, "tar -cvf file.tar..... 2>&/dev/null |"
or die "can't fork: $!";
my $i = 0;
while (<$tar>) {
if( i++ % 1000 == 0 ) print;
}
close $tar or die "tar error: $! $?";
Expanding on what Hobbs provided: if you would like to get the data from the child process back into the parent process, you need an external conduit. I ended up using tmpfs (via /dev/shm) because it is as simple as a file but doesn't hit the disk with I/O.
** Important **
You need to exit the child process; otherwise the "child" process will continue along the same script and you will get double print statements. So in the example below, foreach (@stdoutput) would happen twice despite appearing in the script only once.
$shm_id = time;                       # get unique name for file - example "1452463743"
$shm_file = "/dev/shm/$shm_id.tmp";   # set filename in tmpfs
$| = 1;                               # suffering from buffering
print ("Activity Indicator: ");       # no newline here
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) { # Child
    @stdoutput = `/usr/home/script.pl -o $parameter`; # get output of external command
    open (SHM, ">$shm_file");
    foreach (@stdoutput) {
        print SHM ("$_");             # populate file in tmpfs
    }
    close (SHM);
    exit;                             # quit the child process (will not kill parent script)
} else { # Parent
    while (! waitpid($pid, WNOHANG)) {
        print ("\#");                 # prints a progress bar
        sleep 5;
    }
}
print ("\n");                         # finish up bar and go to new line
open (SHM, "$shm_file");
@stdoutput = <SHM>;                   # now open the file and read it; the array is now in the parent
close (SHM);
unlink ($shm_file);                   # deletes the tmpfs file
chomp(@stdoutput);
foreach (@stdoutput) {
    print ("$_\n");                   # print results of external script
}