I have a Perl script which performs some tasks, one of which is to call a system command to "tar -cvf file.tar.....".
This can often take some time so I'd like the command line to echo back a progress indicator, something like a # echoing back to screen whilst the system call is in progress.
I've been doing some digging around and stumbled across fork. Is this the best way to go? Is it possible to fork off the system command, then create a while loop which checks on the status of the $pid returned by the fork?
I've also seen references to waitpid.... I'm guessing I need to use this also.
fork system("tar ... ")
while ( forked process is still active) {
print #
sleep 1
}
Am I barking up the wrong tree?
Many thanks
John
Perl has a nice construction for this, called "pipe opens." You can read more about it by typing perldoc -f open at a shell prompt.
# Note the use of a list for passing the command. This avoids
# having to worry about shell quoting and related errors.
open(my $tar, '-|', 'tar', 'zxvf', 'test.tar.gz', '-C', 'wherever') or die ...;
Here's a snippet showing an example:
open(my $tar, '-|', 'tar', ...) or die "Could not run tar ... - $!";
while (<$tar>) {
print ".";
}
print "\n";
close($tar);
Replace the print "." with something that prints a hash mark every 10 to 100 lines or so to get a nice gauge bar.
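For instance, a rough sketch of that idea (the tar arguments and the every-50-lines threshold here are only placeholders):
my $count = 0;
open(my $tar, '-|', 'tar', '-cvf', 'file.tar', 'somedir') or die "Could not run tar: $!";
while (<$tar>) {
    print '#' if $count++ % 50 == 0;   # one hash mark per 50 files listed
}
print "\n";
close($tar) or warn "tar exited with status $?";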
An example that doesn't depend on the child process writing any kind of output, and just prints a dot about once a second as long as it's running:
use POSIX qw(:sys_wait_h);
$|++;
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) { # Child
exec('long_running_command', @args)
or die "Couldn't exec: $!";
} else { # Parent
while (! waitpid($pid, WNOHANG)) {
print ".";
sleep 1;
}
print "\n";
}
It could probably stand to have more error-checking, and there might actually be something better already on CPAN. Proc::Background seems promising for abstracting this kind of job away, but I'm not sure how reliable it is.
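If Proc::Background does what its documentation promises, a minimal sketch (untested, assuming its new()/alive() interface) would look like:
use Proc::Background;

$|++;
my $proc = Proc::Background->new('tar', '-cvf', 'file.tar', 'somedir');   # placeholder command
while ($proc->alive) {
    print '.';
    sleep 1;
}
print "\n";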
$|++;
open(my $tar, 'tar ... |') or die "Could not run tar ... - $!";
while (my $file = <$tar>) {
print "$file";
}
print "\n";
close($tar);
This prints the filenames received from tar.
For showing progress during a long-running task, you will find Term::ProgressBar useful -- it does the "printing of # across the screen" functionality that you describe.
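A minimal sketch of how that might look for the tar case, assuming you can estimate the number of files up front (the glob below is just a stand-in for a real file count):
use Term::ProgressBar;

my @files = glob('somedir/*');   # rough total for the bar; a recursive count would be more accurate
my $bar   = Term::ProgressBar->new({ name => 'tar', count => scalar @files });

open(my $tar, '-|', 'tar', '-cvf', 'file.tar', @files) or die "Could not run tar: $!";
my $done = 0;
while (<$tar>) {
    $bar->update(++$done);       # advance the bar once per file name tar prints
}
close($tar);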
I would try something like this
open my $tar, "tar -cvf file.tar..... 2>&/dev/null |"
or die "can't fork: $!";
my $i = 0;
while (<$tar>) {
print if $i++ % 1000 == 0;
}
close $tar or die "tar error: $! $?";
Expanding on what Hobbs provided: if you would like to get the data from the child process back into the parent process, you need an external conduit. I ended up using tmpfs because it is as simple as a file but does not hit the disk with IO.
** Important **
You need to exit the child process, because otherwise the "child" process will continue along the same script and you will get double print statements. So in the example below, foreach (@stdoutput) would happen twice despite appearing only once in the script.
$shm_id = time; #get unique name for file - example "1452463743"
$shm_file = "/dev/shm/$shm_id.tmp"; #set filename in tempfs
$| = 1; #suffering from buffering
print ("Activity Indicator: "); #No new line here
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) { # Child
@stdoutput=`/usr/home/script.pl -o $parameter`; #get output of external command
open (SHM, ">$shm_file");
foreach (@stdoutput) {
print SHM ("$_"); #populate file in tempfs
}
close (SHM);
exit; #quit the child process (will not kill parent script)
} else { # Parent
while (! waitpid($pid, WNOHANG)) {
print ("\#"); # prints a progress bar
sleep 5;
}
}
print ("\n"); #finish up bar and go to new line
open (SHM, "$shm_file");
@stdoutput = <SHM>; #Now open the file and read it. Now array is in parent
close (SHM);
unlink ($shm_file); #deletes the tempfs file
chomp(@stdoutput);
foreach (@stdoutput) {
print ("$_\n"); #print results of external script
}
Related
I need to run my perl tests in parallel and capture STDOUT and STDERR in a separate file for each test file. I'm having no success even in capturing in one file. I've been all over SO and have had no luck. Here is where I started (I'll spare you all the variations). Any help is greatly appreciated. Thanks!
foreach my $file ( @files) {
next unless $file =~ /\.t$/;
print "\$file = $file\n";
$file =~ /^(\w+)\.\w+/;
my $file_pfx = $1;
my $this_test_file_name = $file_pfx . '.txt';
system("perl $test_dir\\$file > results\\$test_file_name.txt &") && die "cmd failed: $!\n";
}
Here is a simple example using Parallel::ForkManager to spawn separate processes.
In each process the STDOUT and STDERR streams are redirected, in two ways for a demo: STDOUT to a variable, that can then be passed around as desired (here dumped into a file), and STDERR directly to a file. Or use a library, with an example in a separate code snippet.
The numbers 1..6 represent batches of data that each child will pick from to process. Only three processes are started right away and then as one finishes another one is started in its place.† (Here they exit nearly immediately, the "jobs" being trivial.)
use warnings;
use strict;
use feature 'say';
use Carp qw(carp);
use Path::Tiny qw(path);
use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new(3);
foreach my $data (1..6) {
$pm->start and next; # start a child process
proc_in_child($data); # code that runs in the child process
$pm->finish; # exit it
}
$pm->wait_all_children; # reap all child processes
say "\nParent $$ done\n";
sub proc_in_child {
my ($data) = @_;
say "Process $$ with data $data"; # still shows on terminal
# Will dump all that was printed to streams to these files
my ($outfile, $errfile) =
map { "proc_data-${data}_" . $_ . ".$$.out" } qw(stdout stderr);
# Redirect streams
# One way to do it, redirect to a variable (for STDOUT)...
open my $fh_stdout, ">", \my $so or carp "Can't open handle to variable: $!";
my $fh_STDOUT = select $fh_stdout;
# ...another way to do it, directly to a file (for any stream)
# (first 'dup' it so it can be restored if needed)
open my $SAVEERR, ">&STDERR" or carp "Can't dup STDERR: $!";
open *STDERR, ">", $errfile or carp "Can't redirect STDERR to $errfile: $!";
# Prints wind up in a variable (for STDOUT) and a file (for STDERR)
say "STDOUT: Child process with pid $$, processing data #$data";
warn "STDERR: Child process with pid $$, processing data #$data";
close $fh_stdout;
# If needed to restore (not in this example which exits right away)
select $fh_STDOUT;
open STDERR, '>&', $SAVEERR or carp "Can't reopen STDERR: $!";
# Dump all collected STDOUT to a file (or pass it around, it's a variable)
path( $outfile )->spew($so);
return 1
}
While STDOUT is redirected to a variable, STDERR cannot be redirected that way and here it goes directly to a file. See open. However there are ways to capture it in a variable as well.
Then you can use the module's ability to return from child processes to the parent, which can then handle those variables. See for example this post and this post and this post. (There's way more, these are merely the ones I know.) Or indeed just dump them to files, as done here.
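For completeness, here is a rough sketch of that return mechanism, using the module's run_on_finish callback; the payload shape (a hashref with a stdout key) is made up for the example:
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(3);

# Runs in the parent each time a child exits; the last argument is whatever
# reference the child handed to finish().
$pm->run_on_finish( sub {
    my ($pid, $exit_code, $ident, $exit_signal, $core_dump, $data) = @_;
    print "child $pid returned: $data->{stdout}\n" if defined $data;
});

foreach my $item (1..6) {
    $pm->start and next;
    my $out = "captured output of child $$ for item $item";   # stand-in for real captured STDOUT
    $pm->finish(0, { stdout => $out });                        # ship a reference back to the parent
}
$pm->wait_all_children;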
Another way is to use modules that can run code and redirect output, like Capture::Tiny
use Capture::Tiny qw(capture);
sub proc_in_child {
my ($data) = @_;
say "Process $$ with data $data"; # on terminal
# Run code and capture all output
my ($stdout, $stderr, @results) = capture {
say "STDOUT: Child process $$, processing data #$data";
warn "STDERR: Child process $$, processing data #$data";
# return results perhaps...
1 .. 4;
};
# Do as needed with variables with collected STDOUT and STDERR
# Return to parent, or dump to file:
my ($outfile, $errfile) =
map { "proc_data-${data}_" . $_ . ".$$.out" } qw(stdout stderr);
path($outfile)->spew($stdout);
path($errfile)->spew($stderr);
return 1
}
† This keeps the same number of processes running. Or, one can set it up to wait for the whole batch to finish and then start another batch. For some details of operation see this post
I think the easiest way is to use shell redirects in your 'system' command. BTW, spawning uncontrolled subprocesses from it with '&' makes me frown.
Here is a simple example with shell redirects and fork.
#!/usr/bin/perl
use strict;
for my $i (0..2) {
my $stdoutName = "stdout$i.txt";
my $stderrName = "stderr$i.txt";
my $pid = fork();
if($pid == 0) {
system("perl mytest.pl 1>$stdoutName 2>$stderrName"); #redirects are here 1> (stdout) and 2> (stderr)
exit $?;
}
}
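One detail the snippet above leaves out is reaping the children; a minimal addition at the end of the parent would be:
# Wait for every forked child so none are left as zombies.
1 while wait() != -1;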
It can display the text in the file; however, after I add new text in gedit, it does not show the updated content.
sub start_thread {
my @args = @_;
print('Thread started: ', @args, "\n");
open(my $myhandle,'<',@args) or die "unable to open file"; # typical open call
for (;;) {
while (<$myhandle>) {
chomp;
print $_."\n";
}
sleep 1;
seek FH, 0, 1; # this clears the eof flag on FH
}
}
Update: videos
https://docs.google.com/file/d/0B4hnKBXrOBqRWEdjTDFIbHJselk/edit?usp=sharing
https://docs.google.com/file/d/0B4hnKBXrOBqRcEFhU3k4dUN4cXc/edit?usp=sharing
How do I print $curpos for the updated data?
for (;;) {
for ($curpos = tell($myhandle); $_ = <$myhandle>;
$curpos = tell($myhandle)) {
# search for some stuff and put it into files
print $curpos."\n";
}
sleep(1);
seek(FILE, $curpos, 0);
}
Like I said, it works for me. The changes to your script are minimal, just a bit of cleanup.
Script: test_tail.pl
#!/usr/bin/perl
sub tail_file {
my $filename = shift;
open(my $myhandle,'<',$filename) or die "unable to open file"; # typical open call
for (;;) {
print "About to read file...\n";
while (<$myhandle>) {
chomp;
print $_."\n";
}
sleep 1;
seek $myhandle, 0, 1; # this clears the eof flag on FH
}
}
tail_file('/tmp/test_file.txt');
Then:
echo -e "aaa\nbbb\nccc\n" > /tmp/test_file.txt
# wait a bit
echo -e "ddd\neee\n" >> /tmp/test_file.txt
Meanwhile (in a different terminal);
$ perl /tmp/test_tail.pl
About to read file...
aaa
bbb
ccc
About to read file...
About to read file...
About to read file...
ddd
eee
Instead of this:
seek $myhandle, 0, 1; # this clears the eof flag on FH
Can you try something like this:
my $pos = tell $myhandle;
seek $myhandle, $pos, 0; # reset the file handle in an alternate way
The file system is trying to give you a consistent view of the file you are reading. To see the changes, you would need to reopen the file.
To see an example of this, try the following:
1. Create a file that has 100 lines of text in it, a man page, for instance:
man tail > foo
2. Print the file slowly:
cat foo | perl -ne 'print; sleep 1;'
3. While that is going on, in another shell or editor, try editing the file by deleting most of the lines.
Result: The file will continue to print slowly, as if you never edited it. Only when you try to print it again, will you see the changes.
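A minimal sketch of that reopen-on-change idea, using the inode from stat to notice when an editor has replaced the file (the path is a placeholder):
my $path = '/tmp/test_file.txt';                       # placeholder path
open(my $fh, '<', $path) or die "unable to open $path: $!";
my $inode = (stat $fh)[1];

for (;;) {
    print while <$fh>;
    sleep 1;
    my $current = (stat $path)[1];
    if (!defined $current or $current != $inode) {     # file was replaced, e.g. by gedit's save
        open($fh, '<', $path) or die "unable to reopen $path: $!";
        $inode = (stat $fh)[1];
    } else {
        seek $fh, 0, 1;                                # same file: just clear EOF and keep reading
    }
}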
The following would also work:
my $TAIL = '/usr/bin/tail -f'; # Adjust accordingly
open my $fh, "$TAIL |"
or die "Unable to run $TAIL : $!";
while (<$fh>) {
# do something
}
I need to modify an existing Perl program. I want to pipe a string (which can contain multiple lines) through an external program and read the output from this program. This external program is used to modify the string. Let's simply use cat as a filter program. I tried it like this, but it doesn't work. (Output of cat goes to STDOUT instead of being read by perl.)
#!/usr/bin/perl
open(MESSAGE, "| cat |") or die("cat failed\n");
print MESSAGE "Line 1\nLine 2\n";
my $message = "";
while (<MESSAGE>)
{
$message .= $_;
}
close(MESSAGE);
print "This is the message: $message\n";
I've read that this isn't supported by Perl because it may end up in a deadlock, and I can understand it. But how do I do it then?
You can use IPC::Open3 to achieve bi-directional communication with child.
use strict;
use IPC::Open3;
my $pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR, 'cat')
or die "open3() failed $!";
my $r;
for(my $i=1;$i<10;$i++) {
print CHLD_IN "$i\n";
$r = <CHLD_OUT>;
print "Got $r from child\n";
}
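To finish cleanly you would normally also close the child's input and reap it, along these lines:
close(CHLD_IN);            # let 'cat' see EOF and exit
waitpid($pid, 0);          # reap the child so it doesn't linger as a zombie
print "child exited with status ", $? >> 8, "\n";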
This involves system programming, so it’s more than a basic question. As written, your main program doesn’t require full-duplex interaction with the external program. Dataflow travels in one direction, namely
string → external program → main program
Creating this pipeline is straightforward. Perl’s open has a useful mode explained in the “Safe pipe opens” section of the perlipc documentation.
Another interesting approach to interprocess communication is making your single program go multiprocess and communicate between—or even amongst—yourselves. The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you’ve opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
This is an open that involves a pipe, which gives nuance to the return value. The perlfunc documentation on open explains.
If you open a pipe on the command - (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
To create the scaffolding, we work in right-to-left order using open to fork a new process at each step.
Your main program is already running.
Next, fork a process that will eventually become the external program.
Inside the process from step 2
First fork the string-printing process so as to make its output arrive on our STDIN.
Then exec the external program to perform its transformation.
Have the string-printer do its work and then exit, which kicks up to the next level.
Back in the main program, read the transformed result.
With all of that set up, all you have to do is implant your suggestion at the bottom, Mr. Cobb.
#! /usr/bin/env perl
use 5.10.0; # for defined-or and given/when
use strict;
use warnings;
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
"V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
"V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel {
given (open(STDIN, "-|") // die "$0: fork: $!") { # / StackOverflow hiliter
snow_fortress when 0;
exec @transform or die "$0: exec: $!";
}
}
given (open(my $fh, "-|") // die "$0: fork: $!") {
hotel when 0;
print while <$fh>;
close $fh or warn "$0: close: $!";
}
Thanks for the opportunity to write such a fun program!
You can use the -n commandline switch to effectively wrap your existing program code in a while-loop... look at the man page for -n:
LINE:
while (<>) {
... # your program goes here
}
Then you can use the operating system's pipe mechanism directly
cat file | your_perl_prog.pl
(Edit)
I'll try to explain this more carefully...
The question is not clear about what part the perl program plays: filter or final stage. This works in either case, so I will assume it is the latter.
'your_perl_prog.pl' is your existing code. I'll call your filter program 'filter'.
Modify your_perl_prog.pl so that the shebang line has an added '-n' switch: #!/usr/bin/perl -n or #!/bin/env "perl -n"
This effectively puts a while(<>){} loop around the code in your_perl_prog.pl
add a BEGIN block to print the header:
BEGIN { print "HEADER LINE\n"; }
You can read each line with '$line = <>;' and process/print
Then invoke the lot with
cat sourcefile |filter|your_perl_prog.pl
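Putting those steps together, a minimal sketch of what your_perl_prog.pl could look like (the header text and per-line processing are placeholders):
#!/usr/bin/perl -n
BEGIN { print "HEADER LINE\n"; }     # runs once, before the implicit while (<>) loop
chomp;                               # -n puts each input line in $_
print "processed: $_\n";             # placeholder per-line processing
END { print "FOOTER LINE\n"; }       # runs once, after all input has been read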
I want to expand on @Greg Bacon's answer without changing it.
I had to do something similar, but wanted to write it without the given/when constructs, and I also found that explicit exit() calls were missing; in the sample code the child simply fell through and exited.
I also had to make it work under ActiveState Perl, but that version of Perl cannot handle this approach.
See this question How to read to and write from a pipe in perl with ActiveState Perl?
#! /usr/bin/env perl
use strict;
use warnings;
my $isActiveStatePerl = defined(&Win32::BuildNumber);
sub pipeFromFork
{
return open($_[0], "-|") if (!$isActiveStatePerl);
die "active state perl cannot cope with dup file handles after fork";
pipe $_[0], my $child or die "cannot create pipe";
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid) { # parent
close $child;
} else { # child
open(STDOUT, ">&=", $child) or die "cannot clone child to STDOUT";
close $_[0];
}
return $pid;
}
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
"V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
"V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel
{
my $fh;
my $pid = pipeFromFork($fh); # my $pid = open STDIN, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
snow_fortress;
exit(0);
}
open(STDIN, "<&", $fh) or die "cannot clone to STDIN";
exec #transform or die "$0: exec: $!";
}
my $fh;
my $pid = pipeFromFork($fh); # my $pid = open my $fh, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
hotel;
exit(0);
}
print while <$fh>;
close $fh or warn "$0: close: $!";
The simplest way to do what the OP needs -- not involving all these cool internals -- is to use a temporary file to hold the output until the external processor is done, like so:
open ToTemp, "|/usr/bin/tac>/tmp/MyTmp$$.whee" or die "open the tool: $!";
print ToTemp $TheMessageWhateverItIs;
close ToTemp;
my $Result = `cat /tmp/MyTmp$$.whee`; # or open and read it, or use File::Slurp, etc
unlink "/tmp/MyTmp$$.whee";
Of course, this isn't going to work for something interactive, but co-routines appear to be out of the scope of the original question.
I have two pieces of code:
1.
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
system("tv_grab_au | tv_sort >> $file");
system("cp $file $HOME/.xmltv/listings.xml");
unlink($file);
2.
while (-e $file) {
sleep 2;
system("tvtime-command DISPLAY_MESSAGE \'Updating TV Guide. Please wait this might take a several minutes...\'");
}
I would like to combine these two pieces of code so that the tv_grab_au xmltv grabber runs (updating the TV Guide) while, every two seconds, a command is sent to tvtime to display the message 'Updating TV Guide. Please wait this might take a several minutes...', for as long as $file exists.
I tried this one:
use strict;
use warnings;
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
use POSIX qw(:sys_wait_h);
$|++;
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) {
system("tv_grab_huro | tv_sort >> $file");
unlink($file);
}
else {
while (! waitpid($pid, WNOHANG)) {
system("tvtime-command DISPLAY_MESSAGE \'Updating TV Guide. Please wait this might take a several minutes...\'");
sleep 2;
}
}
Thanks.
The builtin fork function creates a copy of your current program in a new background process. The original process and the "child" process will then run at the same time. So you can do something like:
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
my $new_pid = fork();
die "fork failed $!" unless defined $new_pid; # this is uncommon
# Task 1 - in the background
if ($new_pid == 0) {
system("tv_grab_au | tv_sort >> $file");
system("cp $file $HOME/.xmltv/listings.xml");
unlink($file);
exit; # don't forget this part!
}
# Task 2 - in the foreground
while (-e $file) {
print "...";
sleep 2;
}
Using $file as an indicator of when the first task has finished has some drawbacks. What if the child code has some runtime error? What if the child process gets interrupted? The child process could exit before it gets a chance to delete $file. Then your while loop in the parent process would never end.
The builtin waitpid command can check if a child process is still running, and can handle the case where the child terminates abnormally.
# Task 2
use POSIX ':sys_wait_h';
while (! waitpid $new_pid, &WNOHANG) { # WNOHANG => non-blocking wait
print "...";
sleep 2;
}
Use fork(). I've added extra sleep() calls so you can see that the processes both run and work. In practice, the crontab update will probably run fast enough that the monitor loop doesn't run at all, or only runs once. I used "unless(...)" because it seems to me to make the code clearer; the thing to remember is that fork() returns the pid to the parent, and zero to the child. The process that doesn't see the pid is therefore a subprocess. (As has been pointed out, if the fork fails, the fork will return undef, and the code will be executing in the original process. In our case, that will simply mean that the monitoring starts up after the writing finishes, so the only thing we lose is the monitoring.)
my $file = "/tmp/.$$.crontab.txt";
my $crontab = <<EOS;
# Crontab lines here. Inserted at @{[scalar localtime()]}
EOS
my ($writer_pid, $monitor_pid);
$|++;
# Open file BEFORE launching processes. The monitor depends on the file's
# presence or absence, so if we opened it in the writer process, there'd be a
# chance the monitor process would check before we created it, and exit without
# monitoring.
die "Cannot open temp file\n" unless open(WRITE, ">" . $file);
# Crontab file handle gets passed to the forked process, so we can just use it.
# Altered so we can see the process do its thing.
unless ($writer_pid = fork()) {
print WRITE $crontab."\n";
close WRITE;
print("crontab -l |grep -v backup >> $file");
sleep 20;
print("crontab $file");
sleep 10;
unlink($file);
print "done!\n";
exit;
}
# Either file will exist, or the previous process will
# have completed. If it exists, we monitor. If not,
# we exit immediately.
unless ($monitor_pid = fork()) {
# Child: monitor the writer.
my $waitcount = 1;
while ( -e $file ) {
sleep 2;
print "($waitcount) installing crontab...";
$waitcount++;
}
print "installed\n";
exit;
}
waitpid($monitor_pid, 0);
waitpid($writer_pid,0);
print "both processes done\n";
I am running the below code snippet on Windows. The server starts listening continuously after reading from client. I want to terminate this command after a time period.
If I use the alarm() function within main.pl, it terminates the whole Perl program (here main.pl), so I placed this system command in a separate Perl file
and call that file (alarm.pl) from the original Perl file using the system command.
But this way I was unable to capture the output of the system() call, either in the original Perl file or in the called one.
Could anybody please let me know how to terminate a system() call, or how to capture its output in the setup I used above?
main.pl
my @output = system("alarm.pl");
print"one iperf completed\n";
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
alarm.pl
alarm 30;
my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
In both ways display.txt is always empty.
There are a few separate issues here.
First, to keep the alarm from killing your script, you need to handle the ALRM signal. See the alarm documentation. You shouldn't need two scripts for this.
Second, system doesn't capture output. You need one of the backtick variants or a pipe if you want to do that. There are answers for that on Stackoverflow already.
Third, if alarm.pl puts anything in display.txt, you discard it in main.pl when you re-open the file in write mode. You only need to create the file in one place. When you get rid of the extra script, you won't have this problem.
I recently had some problems with alarm and system, but switching to IPC::System::Simple fixed that.
Good luck, :)
What the hell was I thinking? You don't need a background process for this task. You just need to follow the example in perldoc -f alarm and wrap your time-sensitive code in an eval block.
my $command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
my @output;
eval {
local $SIG{ALRM} = sub { die "Timeout\n" };
alarm 30;
@output = `$command`;
alarm 0;
};
if ($@) {
warn "$command timed out.\n";
} else {
print "$command successful. Output was:\n", #output;
}
Inside the eval block, you can capture your output the regular way (with backticks or qx() or readpipe). Though if the call times out, there won't be any output.
If you don't need the output (or don't mind hacking some interprocess communication together), an almost idiot-proof alternative is to set the alarm and run the system call in a child process.
$command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
if (($pid = fork()) == 0) {
# child process
$SIG{ALRM} = sub { die "Timeout\n" }; # handling SIGALRM in child is optional
alarm 30;
my $c = system($command);
alarm 0;
exit $c >> 8; # if you want to capture the exit status
}
# parent
waitpid $pid, 0;
waitpid will return when either the child's system command is finished, or when the child's alarm goes off and kills the child. $? will hold the exit code of the system call, or something else (142 on my system) for an unhandled SIGALRM or 255 if your SIGALRM handler calls die.
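If you want to pick those cases apart, the usual decoding of $? looks like this:
my $exit_code = $? >> 8;     # value passed to exit() in the child (or by system)
my $signal    = $? & 127;    # signal that killed the child, if any (14 would be SIGALRM)
if ($signal) {
    warn "child killed by signal $signal\n";
} else {
    print "child exited with code $exit_code\n";
}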
I ran into a similar problem that required:
run a system command and get its output
time out the system command after x seconds
kill the system command process and all child processes
After much reading about Perl IPC and manual fork & exec, I came up with this solution. It is implemented as a simulated 'backtick' subroutine.
use Error qw(:try);
$SIG{ALRM} = sub {
my $sig_name = shift;
die "Timeout by signal [$sig_name]\n";
};
# example
my $command = "vmstat 1 1000000";
my $output = backtick(
command => $command,
timeout => 60,
verbose => 0
);
sub backtick {
my %arg = (
command => undef,
timeout => 900,
verbose => 1,
@_,
);
my @output;
defined( my $pid = open( KID, "-|" ) )
or die "Can't fork: $!\n";
if ($pid) {
# parent
# print "parent: child pid [$pid]\n" if $arg{verbose};
try {
alarm( $arg{timeout} );
while (<KID>) {
chomp;
push @output, $_;
}
alarm(0);
}
catch Error with {
my $err = shift;
print $err->{-text} . "\n";
print "Killing child process [$pid] ...\n" if $arg{verbose};
kill -9, $pid;
print "Killed\n" if $arg{verbose};
alarm(0);
}
finally {};
}
else {
# child
# set the child process to be a group leader, so that
# kill -9 will kill it and all its descendents
setpgrp( 0, 0 );
# print "child: pid [$pid]\n" if $arg{verbose};
exec $arg{command};
exit;
}
wantarray ? @output : join( "\n", @output );
}
Might use "timeout -n " for wrapping your commands if thats already common on your system.