I have a command line function I'd like to execute in Perl. However, I only want it to run for up to X seconds. If at X seconds, no result is returned, I want to move on. For instance, if I wanted to do something like
sub timedFunction {
my $result = `df -h`;
return $result;
}
How could I kill the wait for the command line command to finish if it's not returned any values after 3 seconds?
You want to use an alarm.
local $SIG{ALRM} = sub { die "Alarm caught. Do stuff\n" };
#set timeout
my $timeout = 5;
alarm($timeout);
# some command that might take time to finish,
system("sleep", "6");
# You may or may not want to turn the alarm off
# I'm canceling the alarm here
alarm(0);
print "See ya\n";
You obviously don't have to die here when the alarm signal is caught. You could, say, get the pid of the command you called and kill it instead.
Here's the output from the above example:
$ perl test.pl
Alarm caught. Do stuff
$
Notice the print statement didn't execute after the system call.
It's worth noting that perldoc -f alarm recommends against using alarm to time out a system call unless you wrap the call in an eval/die pair.
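For reference, a minimal sketch of that eval/die pattern around a system call (the 3-second timeout and the sleep 10 stand-in are placeholders; note that the external command itself may keep running after the timeout, so you may still want to record its pid and kill it, as mentioned above):
my $timed_out = 0;
eval {
    local $SIG{ALRM} = sub { die "alarm\n" }; # NB: the \n matters
    alarm 3;
    system("sleep", "10"); # stand-in for the slow command
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n"; # propagate unexpected errors
    $timed_out = 1;
}
print $timed_out ? "Timed out\n" : "Finished in time\n";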
Related
How can I echo a progress bar while an external process is executing, and capture its STDOUT when it's done, using only standard modules and without using fork?
Run an external process, something like: @array = `ls -l`;
While it is executing, print a progress bar, like: print '.';
Capture the STDOUT of the process into an array when it's done
Continue with the main script
I'm reading about IPC::Open2 and IPC::Open3, but I don't understand how to use them for this task. Maybe it's not the right direction?
What do you have so far? If you are having trouble with the interprocess communication, forget about the progress bar for the moment and ask just about that.
You can't really have a progress bar for something that has an indeterminate end. If you don't know how much input you will read, you don't know what fraction of it you have read. People tend to think of progress bars as a representation of the fraction of work done, not just activity. That is, unless you use macOS and understand that "less than one minute" means "more than three hours". ;)
I tend to do something simple, where I output a dot every so often. I don't know how many dots I'll see, but I know that I'll see new ones.
$|++; # unbuffer stdout so the dots appear as they are printed
my $chunk_size = 100;  # a dot every 100 lines (pick whatever feels right)
my $row_length = 50;   # start a new row after 50 dots
while( <$fh> ) {
    print '.' unless $. % $chunk_size;  # $. is the current line number
    print "\n[$.] " unless $. % ($chunk_size * $row_length);
    ...  # process the line
}
That $fh can be anything that you want to read from, including a pipe. perlopentut has examples of reading from external processes. Those are doing a fork, though. And, the other modules will fork as well. What's the constraint that makes you think you can't use fork?
You can get more fancy with your display by using curses and other things (a carriage return is handy :), but I'm not inclined to type those out.
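For what it's worth, a rough sketch of the carriage-return trick (the $fh and the every-100-lines choice are placeholders):
$|++; # unbuffer stdout so the updates show immediately
my $count = 0;
while ( <$fh> ) {
    $count++;
    print "\rread $count lines" unless $count % 100; # \r rewrites the same line
}
print "\rread $count lines\n"; # final count, then move to a new line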
Perhaps the OP is looking for something like the following, just to indicate that the external process is still running.
Define a handler for $SIG{ALRM} and call alarm 1 so that the handler runs every second. Once the process completes, call alarm 0 to turn the alarm off.
use strict;
use warnings;
use feature 'say';
use Data::Dumper;
my $ls_l; # variable to store output of external command
$| = 1; # unbuffered output
$SIG{ALRM} = \&handler;
alarm 1; # run handler every second
say 'Running sig_alarm_sleep';
$ls_l=`./sig_alarm_sleep`;
say ' done';
alarm 0;
my @fields = qw(rwx count user group size month day time name);
my @lines  = split "\n", $ls_l;
my @array;
for my $line (@lines) {
    my %record;
    @record{@fields} = split ' ', $line;
    push @array, \%record;
}
say Dumper(\@array);
exit 0;
sub handler {
    print '.';
    $SIG{ALRM} = \&handler;
    alarm 1;
}
A sample sig_alarm_sleep bash script:
#!/usr/bin/bash
sleep 20
ls -al
I have a long-running program that uses File::Temp's tempdir to create a temporary directory (with a file inside it), and I sometimes interrupt it via ^C.
The following program prints the name of the temporary directory it creates and the name of a file in it.
#!/usr/bin/env perl
use strict;
use warnings;
use File::Temp qw[tempdir];
my $dir = tempdir(CLEANUP => 1);
print "$dir\n";
print "$dir/temp.txt\n";
`touch $dir/temp.txt`;
exit;
On OS X, this creates a directory inside /var/folders
If the last line is exit; or die;, then the folder will get cleaned up and the temporary file inside it will get deleted.
However, if we replace the last line with sleep 20; and then interrupt the perl program via ^C, the temporary directory remains.
% perl maketemp.pl
/var/folders/dr/cg4fl5m11vg3jfxny3ldfplc0000gn/T/ycilyLSFs6
/var/folders/dr/cg4fl5m11vg3jfxny3ldfplc0000gn/T/ycilyLSFs6/temp.txt
^C
% stat /var/folders/dr/cg4fl5m11vg3jfxny3ldfplc0000gn/T/ycilyLSFs6/temp.txt
16777220 6589054 -rw-r--r-- 1 <name> staff 0 0 "Aug 1 20:46:27 2016" "Aug 1 20:46:27 2016" "Aug 1 20:46:27 2016" "Aug 1 20:46:27 2016" 4096 0 0
/var/folders/dr/cg4fl5m11vg3jfxny3ldfplc0000gn/T/ycilyLSFs6/temp.txt
%
Using a signal handler that just calls exit; does clean up the directory, e.g.:
#!/usr/bin/env perl
use strict;
use warnings;
use File::Temp qw[tempdir];
$SIG{INT} = sub { exit; };
my $dir = tempdir(CLEANUP => 1);
print "$dir\n";
print "$dir/temp.txt\n";
`touch $dir/temp.txt`;
sleep 20;
As does using a "trivial" signal handler
#!/usr/bin/env perl
use strict;
use warnings;
use File::Temp qw[tempdir];
$SIG{INT} = sub { };
my $dir = tempdir(CLEANUP => 1);
print "$dir\n";
print "$dir/temp.txt\n";
`touch $dir/temp.txt`;
sleep 20;
I tried looking through the source code (https://github.com/Perl-Toolchain-Gang/File-Temp/blob/master/lib/File/Temp.pm) to determine how tempdir registers a cleanup action.
Here's the exit handler installation
https://github.com/Perl-Toolchain-Gang/File-Temp/blob/master/lib/File/Temp.pm#L1716
which calls _deferred_unlink
https://github.com/Perl-Toolchain-Gang/File-Temp/blob/master/lib/File/Temp.pm#L948
which modifies the global hashes %dirs_to_unlink and %files_to_unlink, but uses the pid $$ as a key for some reason (probably in case the Perl interpreter forks? I'm not sure why that's necessary, though, since removing a directory seems like it would be an idempotent operation).
The actual logic to clean up the files is here, in the END block.
https://github.com/Perl-Toolchain-Gang/File-Temp/blob/master/lib/File/Temp.pm#L878
A quick experiment shows that END blocks are skipped when perl is killed by an untrapped SIGINT:
sleep 20;
END {
print "5\n";
}
# does not print 5 when interrupted
And that they are run when the signal is trapped:
$SIG{INT} = sub {};
sleep 20;
END {
print "5\n";
}
# does print 5 when interrupted
So ... why does the END block get skipped after a SIGINT unless there's a signal handler, even one that seems like it should do nothing?
By default, SIGINT kills the process[1]. By kill, I mean the process is immediately terminated by the kernel. The process doesn't get to perform any cleanup.
By setting a handler for SIGINT, you override this behaviour. Instead of killing the process, the signal handler is called. It might not do anything, but its very existence prevents the process from being killed. In this situation, the program won't exit as a result of the signal unless it chooses to exit (by calling die or exit in the handler). If it does, it gets a chance to clean up as normal.
Note that if a signal for which a handler was defined comes in during a system call, the system call exits with error EINTR in order to allow the program to safely handle the signal. This is why sleep returns as soon as SIGINT is received.
If instead you had used $SIG{INT} = 'IGNORE';, the signal would have been completely ignored. Any system calls in progress won't be interrupted.
[1] On my system, man 1 kill lists the default actions of signals.
Your signal handler $SIG{INT} = sub {} isn't doing nothing; it is trapping the signal and preventing the program from being killed immediately.
But to answer your original question: an END block, as perlmod says,
is executed as late as possible, that is, after perl has finished running the program and just before the interpreter is being exited, even if it is exiting as a result of a die() function. (But not if it's morphing into another program via exec, or being blown out of the water by a signal--you have to trap that yourself (if you can).)
That is, a fatal signal, if not trapped, circumvents Perl's global destruction and does not call END blocks.
I want to have all child processes die when I kill a perl process that is using ForkManager. In the code below, if I run it and hit ctrl+c while the sleep line is running, the sleep process is killed, but the print lines are then all simultaneously executed before the script ends. Ideally, I'd like an interrupt to immediately stop all execution. What can I do?
#!/usr/bin/perl -w
use Parallel::ForkManager;

my $fork1 = Parallel::ForkManager->new(8);
while (1) {
    $fork1->start and next;
    system("sleep 15s");
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;
According to perldoc system, system actually ignores both SIGINT and SIGQUIT:
Since SIGINT and SIGQUIT are ignored during the execution of system,
if you expect your program to terminate on receipt of these signals
you will need to arrange to do so yourself based on the return value.
So if you want your processes to stop executing if you SIGINT during the system call, you need to implement that logic yourself:
#!/usr/bin/perl -w
use Parallel::ForkManager;

my $fork1 = Parallel::ForkManager->new(8);
while (1) {
    $fork1->start and next;
    print "Sleeping...";
    system("sleep 15s") == 0 or exit($?);
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;
Or the more reasonable approach is to use the Perl built-in sleep:
#!/usr/bin/perl -w
use Parallel::ForkManager;

my $fork1 = Parallel::ForkManager->new(8);
while (1) {
    $fork1->start and next;
    print "Sleeping...";
    sleep 15;
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;
First off, using system means something strange might happen, because you're allowing whatever you're calling to handle signals by itself.
That may be your problem.
Otherwise, what you can do with Perl is configure signal handlers: what to do when a signal is received by this process. By default, signals are set to either 'exit' or 'ignore'.
You can see the current settings via print Dumper \%SIG; (with Data::Dumper loaded).
However, the simplest solution to your problem, I think, would be to set a handler that traps SIGINT and then sends a kill to your current process group.
The behavior of kill when a PROCESS number is zero or negative depends on the operating system. For example, on POSIX-conforming systems, zero will signal the current process group, -1 will signal all processes, and any other negative PROCESS number will act as a negative signal number and kill the entire process group specified.
$SIG{'INT'} = sub {
kill ( 'TERM', -$$ );
};
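Pulled together with the loop from the question, that might look roughly like the sketch below (not tested; sending TERM and exiting afterwards are my assumptions, not something the quoted perldoc prescribes):
#!/usr/bin/perl -w
use Parallel::ForkManager;

# Parent (and, via fork, the children) trap SIGINT and signal the whole
# process group, so children stuck inside system() are taken down too.
$SIG{'INT'} = sub {
    kill('TERM', -$$);
    exit 1;
};

my $fork1 = Parallel::ForkManager->new(8);
while (1) {
    $fork1->start and next;
    system("sleep 15s");
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;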
I'm running a command line application from within a Perl script (using system()) that sometimes doesn't return; to be precise, it throws an exception that requires user input to abort the application. The script is used for automated testing of the application I'm running via system(). Since it is part of automated testing, the system() call has to return if the exception occurs, and the test has to be considered a failure.
I want to write a piece of code that runs this application and, if the exception occurs, continues with the script, treating this test as failed.
One way to do this is to run the application for a certain period of time, and if the system() call doesn't return within that period, terminate it and continue with the script.
(How can I terminate a system command with alarm in Perl?)
Code for achieving this:
my $return;
eval {
    local $SIG{ALRM} = sub { die "Timeout\n" };
    alarm 60;
    $return = system("testapp.exe");
    alarm 0;
};
if ($@) {
    print "Test Failed";
} else {
    # compare the returned value with the expected one
}
But this code doesn't work on Windows. I did some research on this and found out that this kind of signal handling doesn't work on Windows (see the book Programming Perl).
Could someone suggest how I could achieve this on Windows?
I would recommend looking at the Win32::Process module. It allows you to start a process, wait on it for some variable amount of time, and even kill it if necessary. Based on the example the documentation provides, it looks quite easy:
use Win32::Process;
use Win32;
sub ErrorReport {
    print Win32::FormatMessage( Win32::GetLastError() );
}

my $ProcessObj;
Win32::Process::Create(
    $ProcessObj,
    "C:\\path\\to\\testapp.exe",
    "",
    0,
    NORMAL_PRIORITY_CLASS,
    "."
) || die ErrorReport();
if($ProcessObj->Wait(60000)) # Timeout is in milliseconds
{
# Wait succeeded (process completed within the timeout value)
}
else
{
# Timeout expired. $! is set to WAIT_FAILED in this case
}
You could also sleep for the appropriate number of seconds and use the kill method in this module. I'm not exactly sure if the NORMAL_PRIORITY_CLASS creation flag is the one you want to use; the documentation for this module is pretty bad. I see some examples using the DETACHED_PROCESS flag. You'll have to play around with that part to see what works.
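For instance, the two branches might be filled in along these lines (a sketch; the exit code of 1 passed to Kill is arbitrary):
if($ProcessObj->Wait(60000)) # Timeout is in milliseconds
{
    # Process finished in time: fetch its exit code and compare as needed
    my $exitcode;
    $ProcessObj->GetExitCode($exitcode);
    print "testapp.exe exited with code $exitcode\n";
}
else
{
    # Timeout expired: kill the stuck process and fail the test
    $ProcessObj->Kill(1);
    print "Test Failed\n";
}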
See Proc::Background, it abstracts the code for both win32 and linux, the function is timeout_system( $seconds, $command, $arg, $arg, $arg )
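A minimal sketch of that call (the 60-second timeout and testapp.exe are carried over from the question; check the module's documentation for the exact meaning of the return value):
use strict;
use warnings;
use Proc::Background qw(timeout_system);

# Run testapp.exe for at most 60 seconds; it is killed if it runs longer.
my $status = timeout_system(60, "testapp.exe");
print $status == 0 ? "Test Passed\n" : "Test Failed\n";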
I have a Perl script that runs as a daemon, looping all the time. I want to run a subfunction in the script based on time (or a timer), so that every 2 hours it runs that subfunction and then continues with its loop. I'm thinking of getting the epoch time and just checking it a few times through the loop, and once more than 2 hours have passed, running the subfunction. Is there a better way to do this in Perl?
Thanks,
LF4
This depends on whether there should be 2 hours since the START of the last subroutine run, or since the END of the last run.
1) If the latter (2 hours between the end of running the last subroutine and the start of the new one), cespinoza's solution is perfectly acceptable (loop infinitely, and call sleep(7200); after executing the subroutine).
my $timeout = 7200;
while (1) {
    dostuff();
    sleep($timeout);
}
The only problem with this is that it can't handle the case where dostuff() takes forever, e.g. gets stuck; for a discussion of why that's an important situation to consider, and approaches to handle it, see below.
2) If the former (2 hours between starting points), you have three options, which differ in how they handle a subroutine run that exceeds 2 hours[0]. Your 3 options, explained in detail below, are to either:
2a) kick off a new subroutine while the old one keeps running (in parallel);
2b) to kick off a new subroutine AFTER the old one finishes;
2c) to kick off a new subroutine but first stop the execution of the prior one.
Options 2a and 2c require you to set an alarm() for 2 hours, and they differ in what happens when the alarm is triggered.
[0] NOTE: since any subroutine is likely to require at least SOME resources from the PC, there's always a chance, however small, that it will exceed 2 hours, so you have to pick one of these 3 options to handle such a scenario.
2a) Kick off every 2 hours, running in parallel with old execution if not finished.
This option is, essentially, implementing cron functionality.
Anytime you hear the word parallel, you will likely want to fork off a process.
my $timeout = 7200;
while (1) { # Not tested!
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };
        my $child_pid = fork();
        if (!defined($child_pid)) {
            die "cannot fork: $!\n";
        } elsif (!$child_pid) { # Child
            dostuff();
            exit;
        }
        # Parent continues; the alarm caps the sleep at 2 hours
        alarm $timeout;
        sleep; # forever (until a signal, i.e. the alarm, arrives)
    };
    die unless $@ eq "alarm\n"; # propagate unexpected errors
    # We don't need to check whether $@ is true, because of the forever sleep
}
2b) Kick off every 2 hours, if the old one didn't finish, let it run till it finishes
This can be re-worded as "kick off the task; if it finishes in under 2 hours, sleep for the remainder".
my $timeout = 7200;
while (1) {
    my $start = time;
    dostuff();
    my $end = time;
    my $lasted = $end - $start;
    if ($lasted < $timeout) {
        sleep($timeout - $lasted);
    }
}
2c) Kick off every two hours, if the previous one didn't finish, time it out and kill it
Whenever you see logic like this, alarm is obviously the answer.
while (1) {
    my $finished = 0;
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };
        alarm 7200;
        dostuff();
        $finished = 1;
        sleep; # forever (until the alarm fires)
    };
    die unless $@ eq "alarm\n"; # propagate unexpected errors
    warn "Timed out!!!\n" unless $finished;
}
P.S. As cespinoza noted, you need to somehow daemonize the script (ensure it doesn't get killed when you exit the shell that started it), by either Unix means (e.g. launching it as nohup) or Perlish means (search for daemonize + Perl on Stackoverflow for mechanics of that).
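One common Perlish option (an assumption on my part; the P.S. only says to search for it) is Proc::Daemon, roughly like this (the log path is just a placeholder):
use strict;
use warnings;
use Proc::Daemon;

Proc::Daemon::Init();   # detach from the terminal and run in the background

while (1) {
    dostuff();
    sleep 7200;
}

# stand-in for the real work; after Init() stdout is gone, so log to a file
sub dostuff {
    open my $fh, '>>', '/tmp/mydaemon.log' or return;
    print {$fh} scalar(localtime), ": doing the 2-hourly work\n";
    close $fh;
}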
Something like crontab would be best to do a timed job like that. However, if you want to run a Perl daemon, you'll have to use some kind of event handler. Two choices off the top of my head are POE and AnyEvent.
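For example, with AnyEvent a repeating 2-hour timer is only a few lines (a sketch; dostuff() stands in for your subfunction):
use strict;
use warnings;
use AnyEvent;

my $timer = AnyEvent->timer(
    after    => 0,       # fire once right away
    interval => 7200,    # then every 2 hours
    cb       => sub { dostuff() },
);

AnyEvent->condvar->recv; # enter the event loop and run forever

sub dostuff { print scalar(localtime), ": doing the 2-hourly work\n" }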
You might want to check Schedule::Cron for task planning and execution.
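For instance (a sketch; the crontab spec "0 */2 * * *" fires at the top of every second hour, and dostuff() stands in for your subfunction):
use strict;
use warnings;
use Schedule::Cron;

my $cron = Schedule::Cron->new(sub { });     # default dispatcher, unused here
$cron->add_entry("0 */2 * * *", \&dostuff);  # run dostuff every 2 hours
$cron->run();                                # blocks and runs entries on schedule

sub dostuff { print scalar(localtime), ": doing the 2-hourly work\n" }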