How to create an interrupt after x seconds in Perl?

Let me explain task with code example:
#!/usr/bin/perl
use strict;
use warnings;
my $t = 3;
eval {
    local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
    print "Start $t\n";
    alarm(10);
    sleep($t); # instead of this I have some code that collects data
    alarm(0);
    print "done with $t\n";
};
if ($@) {
    die unless $@ eq "alarm\n";
    print "timeout\n";
}
Instead of sleep I have some code that pushes data to an array. The array is guaranteed to be filled with the needed data within x seconds.
Question: how do I print the array after x seconds, without using sleep (in a non-blocking way)?
As far as I understand, the simplest way to set a timer in Perl is to use $SIG{ALRM}. But what do I do if I don't need a timer (can't use sleep), I just need to set one interrupt that must run after a pre-defined number of seconds? Maybe I should use SIGINT for this task?
Any help appreciated.

To create your own interrupts, you need two threads of execution. One way to do this is to launch a child process that will signal its parent when some condition is met.
$SIG{USR1} = \&code_to_run_after_interrupt;
my $ppid = $$;    # process id of parent
if (fork() == 0) {
    # child process
    sleep 15;
    kill 'USR1', $ppid;
    exit;
}
... # main execution thread
15 seconds after the fork call, your main script will stop what it's doing, execute the code in a subroutine named code_to_run_after_interrupt, and then resume the main thread of execution.
(I use SIGUSR1 here because handling SIGINT may leave you unable to use Ctrl-C to stop your program.)

That's the whole point of alarm! Just use a more suitable signal handler.
my @x;
$SIG{ALRM} = sub {
    print("$_\n") for @x;
    $SIG{ALRM} = undef;
};
alarm(10);
...


How to execute a specific subroutine code in background?
I'd like to fire several instances of the ping subroutine below in parallel. I know it has been asked before, but I couldn't find my way to resolving this. The examples I found led me to using fork; however, I want to execute only the subroutine code in the background, and I need some way to get the subroutine result later on in the main code.
Thank you very much.
#!/usr/bin/perl
use strict;
use warnings;
my $ipfile = "ips.txt";
open (my $fh, '<', $ipfile) or die "I couldn't open file $ipfile\n";
chomp (my @ips = <$fh>);
close $fh;

my %pingResult;

foreach my $ip (@ips) {
    ping($ip);
}

foreach my $ip (keys %pingResult) {
    print "ping result for $ip: $pingResult{$ip}\n";
}

sub ping {
    my $ip = $_[0];
    $pingResult{$ip} = `/sbin/ping -t 1 -c 1 $ip | grep packet`;
    chomp ($pingResult{$ip});
}
The Parallel::ForkManager module provides for data to be returned to the parent process from the children. It works by serialising the data and writing it to a temporary file.
The second parameter of the Parallel::ForkManager->new call must be the path to a directory where these temporary files may be stored, and the second parameter to a child's finish call must be a reference to the scalar value that should be returned. That scalar value may be a simple string or number, or it can be a reference to a hash or array if a complex structure must be returned.
To collect the return data, you must define a run_on_finish callback which collects information about each terminating child process.
In this case my ips.txt file just contains the eight letters A through to H, and I have used this code
sleep rand 5 + 2;
my $ping = rand 10;
as a representation of the ping action. The returned value is the random number $ping. The ident of each child is an index into the @ip_addresses array, and both the child and the parent can use this value to identify which address the child has processed
You may well want to return non-zero exit status values (the first parameter of the finish call) so that the parent can tell whether the ping failed completely. It's available from the $exit_code parameter of the run_on_finish callback
use strict;
use warnings 'all';

use Parallel::ForkManager;
use Cwd 'cwd';

use constant IP_FILE => 'ips.txt';

my @ip_addresses;
{
    open my $fh, '<', IP_FILE or die $!;
    @ip_addresses = <$fh>;
    chomp @ip_addresses;
}

my %ping_results;

my $pfm = Parallel::ForkManager->new(10, cwd);

$pfm->run_on_finish( sub {
    my ($pid, $exit_code, $ident, $exit_signal, $dump, $data) = @_;
    $ping_results{$ip_addresses[$ident]} = $$data;
});

for my $ident ( 0 .. $#ip_addresses ) {
    my $pid = $pfm->start($ident);
    next if $pid;

    sleep rand 5 + 2;
    my $ping = rand 10;

    $pfm->finish(0, \$ping);
}

$pfm->wait_all_children;

use Data::Dump;
dd \%ping_results;
output
{
A => 4.40219991930888,
B => 2.82913053498731,
C => 3.34837183912413,
D => 3.39050637182908,
E => 6.6558553334059,
F => 6.72843905721919,
G => 4.73434782211797,
H => 3.30697605942504,
}
Some people consider threads dangerous and ugly. But especially in conjunction with Thread::Queue I like their elegance:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;
use threads;
use threads::shared;
use Thread::Queue;
use constant NUM_THREADS => 2;
my $workitems = Thread::Queue->new();
my %pingResult : shared;

sub main
{
    # create 2 worker threads
    threads->create( \&ping ) foreach ( 1 .. NUM_THREADS );

    # put the IPs into our working queue:
    my @ips = qw(127.0.0.1 stackoverflow.com localhost);
    $workitems->enqueue(@ips);
    $workitems->end();

    # wait for the threads to finish:
    $_->join() foreach ( threads->list() );

    print Data::Dumper::Dumper( \%pingResult );
}

sub ping
{
    while ( my $ip = $workitems->dequeue() ) {
        my $result = `/bin/ping -t 1 -c 1 $ip | grep packet`;
        chomp($result);

        lock(%pingResult);
        $pingResult{$ip} = $result;
    }
}

main();
With T::Q::enqueue() you place "items" in the queue, in this case IP addresses. In this example you do this from the main thread. After you're done with filling up the queue you call T::Q::end().
The T::Q::dequeue() (called from the thread ping) blocks until an item can be read and removed from the queue. The ping function then calls /[s]bin/ping and puts the result into the global hash %pingResult. The variable is marked as shared which means it is shared between all threads. Because of that you need to protect it against concurrent access by means of the lock function. It blocks until no one else holds the lock. The lock automatically unlocks when it goes out of scope, that is: after every single iteration of the while loop. (There is no unlock function.)
The second ping-thread does exactly the same and it's kind of random which thread processes which IP address. They both pick IPs from the queue until T::Q::dequeue() returns some falsy value and they then exit.
The call T::Q::end() unblocks all calls to T::Q::dequeue() (no matter whether the queue is empty or not) and thus eventually ends the while loops in the threads.
Btw: although the T::Q is shared between all threads (including main), there's no need to mark it as shared or lock it because it has built-in thread-safety and does all that under the hood.
Instead of using a global shared variable for the results you could also create a second result queue and let the ping-threads put their results in there and let the main thread dequeue them one by one.
Note: When using threads it's far better to not create a new thread for every single workitem (IP address) but to create N threads in advance and then feed them with work through a queue. Thread creation is expensive. The T::Q module is just perfect for this worker-thread model.
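That result-queue alternative might look like the following sketch (the "pong from ..." string and the worker sub name are illustrative stand-ins for a real ping call): a second Thread::Queue carries [ip, result] pairs back to the main thread, so no shared hash and no explicit lock is needed.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

use threads;
use Thread::Queue;

use constant NUM_THREADS => 2;

my $workitems = Thread::Queue->new();
my $results   = Thread::Queue->new();    # second queue: workers -> main

sub worker {
    while ( defined( my $ip = $workitems->dequeue() ) ) {
        my $result = "pong from $ip";    # stand-in for the real ping call
        # Thread::Queue does its own locking, so no lock() is needed here
        $results->enqueue( [ $ip, $result ] );
    }
}

threads->create( \&worker ) for 1 .. NUM_THREADS;

$workitems->enqueue(qw(127.0.0.1 localhost));
$workitems->end();
$_->join() for threads->list();

# drain the result queue in the main thread
my %pingResult;
while ( defined( my $pair = $results->dequeue_nb() ) ) {
    my ( $ip, $result ) = @$pair;
    $pingResult{$ip} = $result;
}
```

dequeue_nb() is the non-blocking variant: once all workers are joined it returns undef on an empty queue instead of waiting, which is exactly what the drain loop wants.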
What you need is IO::Pipe::Producer. It's designed precisely for this problem. You would have to make an adjustment to your sub to print the result and the parent script can read it in through the returned handle. You just call its getSubroutineProducer method in a loop and grab the handles for processing once the jobs all start:
use IO::Select;
use IO::Pipe::Producer;

my $obj = new IO::Pipe::Producer();
my $sel = new IO::Select;
my $pingHandle = {};
my $pingResult = {};

foreach my $ip (@ips) {
    my $handle = $obj->getSubroutineProducer(\&ping, $ip);
    $sel->add($handle);
    $pingHandle->{$handle} = $ip;
}
The above starts the ping jobs and then you use your IO::Select object to read from them in a non-blocking way, until there's no longer any output from any of the handles. You might improve the following with a timeout, but I believe the ping command already has one...
while (my @fhs = $sel->can_read())
{
    foreach my $fh (@fhs)
    {
        my $line = <$fh>;
        unless (defined($line))
        {
            $sel->remove($fh);
            close($fh);
            next;
        }
        $pingResult->{$pingHandle->{$fh}} .= $line;
    }
}
Note, I used a handle-to-IP lookup to be able to put the output from each handle in the correct position in the pingResult hash. Then all you have to do is print the results below. Note, I removed the "\n" because I edited your ping sub to print the result, and the "\n" there prevents potential buffering issues. You might wish to set $| to a non-zero value at the top of the script to force the buffer to flush every time.
foreach my $ip (keys %$pingResult) {
    print "ping result for $ip: $pingResult->{$ip}";
}

sub ping {
    my $ip = $_[0];
    my $mypingresult = `/sbin/ping -t 1 -c 1 $ip | grep packet`;
    print($mypingresult);
}

Alarm is not triggering on time

I have used an alarm function in my script which is not triggering at the time it should:
Here is my code :
$SIG{ALRM} = sub {
    print "*****Test Fail*****";
};

eval {
    alarm(10);
    getTheBootTime();
    alarm(0);
};
die $@ if $@;
getTheBootTime() takes 5 minutes to execute. Am I doing anything wrong here?
Assuming that getTheBootTime() is a computation and does NOT tamper with alarm and/or sleep itself, the answer is as follows.
print "something"; without a trailing \n may not output anything, as the printed string gets stuck in the buffer until a newline is printed. This is the default behavior (unless $| is set to true or the specific file handle has autoflush turned on).
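A minimal sketch of the buffering effect (the kill here just delivers SIGALRM to the script itself so the handler runs):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Without autoflush, a print with no trailing \n can sit in the buffer
# and make it look as if the handler never fired.
$| = 1;    # turn on autoflush for the currently selected handle (STDOUT)

my $fired = 0;
$SIG{ALRM} = sub {
    $fired = 1;
    print "*****Test Fail*****";    # now flushed immediately, even with no \n
};

kill 'ALRM', $$;    # send ourselves SIGALRM to trigger the handler
```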
Also, the specified $SIG{ALRM} does NOT interrupt execution (there is no die in it), which is what the eval/alarm combination expects.
So the following may be in $SIG{ALRM}:
$SIG{ALRM} = sub {
    print "alarm, no interruption, newline\n";
};
or
$SIG{ALRM} = sub {
    die "alarm, interruption";
};

Why doesn't die in alarm signal handler kill the process?

From How can I specify timeout limit for Perl system call?
eval {
    local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
    alarm $timeout;
    $nread = sysread SOCKET, $buffer, $size;
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n"; # propagate unexpected errors
    # timed out
}
else {
    # didn't
}
If a timeout happens, shouldn't sub { die "alarm\n" }; cause the end of the process? I guess I am not able to understand die. This page http://www.cs.cf.ac.uk/Dave/PERL/node111.html says that "The die() function is used to quit your script and display a message for the user to read". However, in the case of the script above, the script will process the code in # timed out. Also, sysread continues to work. Instead of sysread, I had a perl script that slept for 30 seconds. My timeout was set to 10 seconds. As expected, the code in # timed out was executed, but the script continued to sleep. Any inputs appreciated.
die doesn't cause the end of a process, it throws an exception.
Now, if nothing catches an exception, that ends a process, but you have code in place to catch this very exception.
The process doesn't end because you explicitly prevent it from ending.
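A minimal sketch of the distinction: the same die that would end the process when uncaught merely sets $@ when an eval is there to catch it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Caught: eval traps the exception and execution continues afterwards
eval { die "alarm\n" };
print "still running, caught: $@";    # $@ now holds "alarm\n"

# Uncaught: this would terminate the script with the given message
# die "alarm\n";
```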
Since you're not very clear on what behaviour you are getting, there could be another possibility: that you are using a Windows build of Perl.
alarm is a Unix system call. Its very purpose (sending a signal after a certain amount of time has passed) makes no sense on Windows, since Windows doesn't have signals.
Perl emulates alarm to some extent, but only in a very limited manner. sleep could very well be the only operation that's interruptible by alarm. Otherwise, the timeout is only checked between statements.
So it won't interrupt sysread, but once sysread returns, Perl notices the timeout expired and emulates a signal then.
From man alarm
alarm() arranges for a SIGALRM signal to be delivered to the calling process in seconds seconds.
Before SIGALRM is delivered, execution reaches the else block. Insert a read from STDIN before sysread so that SIGALRM triggers, giving the expected results.
"Instead of sysread, I had a perl script that slept for 30 seconds. My timeout was set to 10 seconds. As expected, the code in #timed out is executed but the script continued to sleep."
Really?
#!/usr/bin/perl
use strict;
use warnings FATAL => qw(all);

eval {
    open my $fh, '<', $0 or die;
    local $SIG{ALRM} = sub {
        print STDERR "hello!\n";
        die "bye!";
    };
    alarm 3;
    while (<$fh>) {
        print $_;
        sleep 1;
    }
    close $fh;
};
if ($@) {
    print "HERE: $@\n";
}
The output:
#!/usr/bin/perl
use strict;
use warnings FATAL => qw(all);
hello!
HERE: bye! at ./test.pl line 9, <$fh> line 3.
Over in the expected 3 seconds; this is still the case if I just use "sleep 100" instead of the file read. Note that if you spawn a subprocess, alarm will not kill that and the parent process must wait. In that case, the "hello!" in the signal handler will appear when alarm fires, but the eval which catches the die will not complete until the subprocess does.
I had the same issue when porting a Linux Perl script to Windows.
I solved it by ...
Creating a non-blocking socket
$recsock = IO::Socket::INET->new(
    LocalPort => 68,
    Proto     => "udp",
    Broadcast => 1,
    Blocking  => 0,
) or die "socket: $@";
Adding a $continue variable to the timeout handler

# Timeout handler
$SIG{ALRM} = sub {
    print "timeout\n";
    $continue = 1;
};
and checking for $continue becoming true when the timeout occurs:

alarm($timeout);
while (1) {
    $recsock->recv($newmsg, 1024);
    eval {
        $packet = Net::Package->new($newmsg);
        ...
    };
    sleep 0.1;    # note: core sleep() truncates to whole seconds; use Time::HiRes for sub-second sleeps
    last if ($continue);
}
alarm(0);

Kill a hung child process

My Perl script runs an external program (which takes a single command-line parameter) and processes its output. Originally, I was doing this:
my @result = `prog arg`;
However, it turns out that the program is buggy and hangs unpredictably in rare cases. How can I kill the program if it hasn't exited after a certain amount of time? The script has to work both on Windows and on Linux, and it is my understanding that alarms and forks don't work well (or at all) on Windows.
I found a module called IPC::Run but I can't figure out how to use it properly from its documentation. :-( I tried this:
use strict;
use warnings;
use IPC::Run qw(run timeout);
my $in;
my $out;
my $err;
my @result;
my @cmd = qw(prog arg);

run \@cmd, \$in, \$out, \$err, timeout (10) or die "@cmd: $?";

push @result, $_ while (<$out>);
close $out;
print @result;
As a test, I created a program that just sleeps 60 seconds, prints a string to stdout and exits. When I try to run it with the above code, it hangs for 60 seconds (instead of for 10 seconds, as specified in the timeout) and aborts with a bizarre error:
IPC::Run: timeout on timer #1 at C:/Bin/Unix/Perl/site/lib/IPC/Run.pm line 2956
Then I found another module, Proc::Reliable. From the description, it seems to do precisely what I want. Except that it doesn't work! I tried this:
use strict;
use warnings;
use Proc::Reliable;
my $proc = Proc::Reliable->new ();
$proc->maxtime (10);
my $out = $proc->run ("prog arg");
print "$out\n";
It indeed aborts the child process after 10 seconds. So far, so good. But then I modified the external program and made it sleep for only 5 seconds. This means that the program should finish before the 10-second timeout specified in the above code and its stdout output should be captured into the variable $out. But it isn't! The above script doesn't output anything.
Any ideas how to do it properly? (Fixing the buggy external program is not an option.) Thanks in advance.
Try the poor man's alarm
my $pid;
if ($^O eq 'MSWin32') {
    $pid = system 1, "prog arg";    # Win32 only: run process in background
} else {
    $pid = fork();
    if (defined($pid) && $pid == 0) {
        exec("prog arg");
    }
}
my $poor_mans_alarm = "sleep 1,kill(0,$pid)||exit for 1..$TIMEOUT;kill -9,$pid";
system($^X, "-e", $poor_mans_alarm);
The poor man's alarm runs in a separate process. Every second, it checks whether the process with identifier $pid is still alive. If the process isn't alive, the alarm process exits. If the process is still alive after $TIMEOUT seconds, it sends a kill signal to the process (I used 9 to make it untrappable and -9 to take out the whole subprocess tree; your needs may vary, and kill 9,... is also more portable).
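The kill(0, $pid) trick deserves a note: signal number 0 delivers nothing and only tests whether the process can be signalled, which is why it works as a liveness check. A tiny sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Signal 0 is special: nothing is actually sent, but kill still reports
# whether the target process exists and we are allowed to signal it.
my $alive = kill 0, $$;    # our own pid is certainly alive
print "process $$ is alive\n" if $alive;

# For a nonexistent pid, kill 0 returns false and sets $! to ESRCH.
```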
Edit: How do you capture the output of the process with the poor man's alarm?
Not with backticks -- then you can't get the process id and you may lose the intermediate output if the process times out and gets killed. The alternatives are
1) send output to a file, read the file when the process is done
$pid = system 1, "prog arg > some_file";
... # start poor man's alarm, wait for program to finish ...
open my $fh, '<', 'some_file';
my @process_output = <$fh>;
...
2) use Perl's open to start the process
$pid = open my $proc, '-|', 'prog arg';
if (fork() == 0) {
    # run poor man's alarm in a background process
    exec($^X, '-e', "sleep 1,kill 0,$pid||exit ...");
}
my @process_output = ();
while (<$proc>) {
    push @process_output, $_;
}
The while loop will end when the process ends, either naturally or unnaturally.
This is the best I could do. Any ideas on how to avoid the use of a temporary file on Windows would be appreciated.
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp;
use Win32::Process qw(STILL_ACTIVE NORMAL_PRIORITY_CLASS);

my $pid;
my $timeout = 10;
my $prog = "prog arg";
my @output;

if ($^O eq "MSWin32")
{
    my $exitcode;
    my $fh = File::Temp->new ();
    my $output_file = $fh->filename;
    close ($fh);
    open (OLDOUT, ">&STDOUT");
    open (STDOUT, ">$output_file" ) || die ("Unable to redirect STDOUT to $output_file.\n");
    Win32::Process::Create ($pid, $^X, $prog, 1, NORMAL_PRIORITY_CLASS, '.') or die Win32::FormatMessage (Win32::GetLastError ());
    for (1 .. $timeout)
    {
        $pid->GetExitCode ($exitcode);
        last if ($exitcode != STILL_ACTIVE);
        sleep 1;
    }
    $pid->GetExitCode ($exitcode);
    $pid->Kill (0) or die "Cannot kill '$pid'" if ($exitcode == STILL_ACTIVE);
    close (STDOUT);
    open (STDOUT, ">&OLDOUT");
    close (OLDOUT);
    open (FILE, "<$output_file");
    push @output, $_ while (<FILE>);
    close (FILE);
}
else
{
    $pid = open my $proc, "-|", $prog;
    exec ($^X, "-e", "sleep 1, kill (0, $pid) || exit for 1..$timeout; kill -9, $pid") unless (fork ());
    push @output, $_ while (<$proc>);
    close ($proc);
}

print "Output:\n";
print @output;
You may want to use the alarm system call, as in perldoc -f alarm.

How can I terminate a system command with alarm in Perl?

I am running the below code snippet on Windows. The server starts listening continuously after reading from the client. I want to terminate this command after a time period.
If I use the alarm() function call within main.pl, it terminates the whole Perl program (here main.pl), so I placed the system command in a separate Perl file
and call that file (alarm.pl) from the original Perl file using the system command.
But this way I was unable to capture the output of the system() call in either the original Perl file or the called one.
Could anybody please let me know a way to terminate a system() call, or to capture its output given the approach I used above?
main.pl
my @output = system("alarm.pl");
print "one iperf completed\n";
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
alarm.pl
alarm 30;
my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
In both ways display.txt is always empty.
There are a few separate issues here.
First, to keep the alarm from killing your script, you need to handle the ALRM signal. See the alarm documentation. You shouldn't need two scripts for this.
Second, system doesn't capture output. You need one of the backtick variants or a pipe if you want to do that. There are answers for that on Stack Overflow already.
Third, if alarm.pl puts anything in display.txt, you discard it in main.pl when you re-open the file in write mode. You only need to create the file in one place. When you get rid of the extra script, you won't have this problem.
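The second point in a nutshell: system hands the child's output straight to the terminal and only returns an exit status, while backticks capture the output. A small sketch using $^X (the path of the running perl binary) as a portable test command:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# system: we only get the exit status; output goes to our own STDOUT
my $status = system($^X, '-e', 'print "hello\n"');
die "command failed" if $status != 0;

# backticks: the command's standard output is captured into a variable
my $out = `$^X -e 'print qq{hello\\n}'`;
chomp $out;    # $out now contains "hello"
```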
I recently had some problems with alarm and system, but switching to IPC::System::Simple fixed that.
Good luck, :)
What the hell was I thinking? You don't need a background process for this task. You just need to follow the example in the perldoc -f alarm function and wrap your time-sensitive code in an eval block.
my $command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
my @output;
eval {
    local $SIG{ALRM} = sub { die "Timeout\n" };
    alarm 30;
    @output = `$command`;
    alarm 0;
};
if ($@) {
    warn "$command timed out.\n";
} else {
    print "$command successful. Output was:\n", @output;
}
Inside the eval block, you can capture your output the regular way (with backticks or qx() or readpipe). Though if the call times out, there won't be any output.
If you don't need the output (or don't mind hacking some interprocess communication together), an almost idiot-proof alternative is to set the alarm and run the system call in a child process.
$command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
if (($pid = fork()) == 0) {
    # child process
    $SIG{ALRM} = sub { die "Timeout\n" }; # handling SIGALRM in child is optional
    alarm 30;
    my $c = system($command);
    alarm 0;
    exit $c >> 8; # if you want to capture the exit status
}
# parent
waitpid $pid, 0;
waitpid will return when either the child's system command is finished, or when the child's alarm goes off and kills the child. $? will hold the exit code of the system call, or something else (142 on my system) for an unhandled SIGALRM or 255 if your SIGALRM handler calls die.
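The layout of $? mentioned above can be checked directly: the child's exit code sits in the high byte, and the terminating signal (if any) in the low 7 bits. A small sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run a child that exits with status 3, then decode $?
system($^X, '-e', 'exit 3');

my $exit_code = $? >> 8;     # 3: the value the child passed to exit()
my $signal    = $? & 127;    # 0: the child was not killed by a signal

print "exit=$exit_code signal=$signal\n";
```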
I ran into a similar problem that requires:
run a system command and get its output
time out the system command after x seconds
kill the system command process and all child processes
After much reading about Perl IPC and manual fork & exec, I came up with this solution. It is implemented as a simulated 'backtick' subroutine.
use Error qw(:try);

$SIG{ALRM} = sub {
    my $sig_name = shift;
    die "Timeout by signal [$sig_name]\n";
};

# example
my $command = "vmstat 1 1000000";
my $output = backtick(
    command => $command,
    timeout => 60,
    verbose => 0,
);

sub backtick {
    my %arg = (
        command => undef,
        timeout => 900,
        verbose => 1,
        @_,
    );

    my @output;

    defined( my $pid = open( KID, "-|" ) )
        or die "Can't fork: $!\n";

    if ($pid) {
        # parent
        # print "parent: child pid [$pid]\n" if $arg{verbose};
        try {
            alarm( $arg{timeout} );
            while (<KID>) {
                chomp;
                push @output, $_;
            }
            alarm(0);
        }
        catch Error with {
            my $err = shift;
            print $err->{-text} . "\n";
            print "Killing child process [$pid] ...\n" if $arg{verbose};
            kill -9, $pid;
            print "Killed\n" if $arg{verbose};
            alarm(0);
        }
        finally {};
    }
    else {
        # child
        # set the child process to be a group leader, so that
        # kill -9 will kill it and all its descendants
        setpgrp( 0, 0 );
        # print "child: pid [$pid]\n" if $arg{verbose};
        exec $arg{command};
        exit;
    }

    wantarray ? @output : join( "\n", @output );
}
You might use the timeout utility (timeout <seconds> <command>) to wrap your commands, if it's already available on your system.