Currently I am looking to daemonize a Perl script. Sadly, most answers I have found are out of date, and I do not really understand how to begin the daemon process (especially for Perl scripts).
Right now I am looking at Proc::Daemon, but again I do not know where to begin, or whether it should be done with or without the use of a module.
I believe an example of what I am looking for will give this question a little more direction.
Example
Say I am on OS X and I want to write a Perl script that can run as a daemon. It responds to the signal HUP by printing the contents of a file from a certain directory. If it receives the signal USR1, it prints out the contents differently. What is the most appropriate way to do this as a daemon?
This is all you need:
#!/usr/bin/perl
use strict;
use warnings;

use Daemon::Daemonize qw( daemonize write_pidfile );

sub sighup_handler {
    ...
}

sub sigusr1_handler {
    ...
}

{
    my $name          = "...";
    my $error_log_qfn = "/var/log/$name.log";
    my $pid_file_qfn  = "/var/run/$name.pid";

    daemonize(
        close  => 'std',
        stderr => $error_log_qfn,
    );

    $SIG{HUP}  = \&sighup_handler;
    $SIG{USR1} = \&sigusr1_handler;

    write_pidfile($pid_file_qfn);

    sleep while 1;
}
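Once the daemon is running, you can exercise the handlers by sending signals to the process whose PID is recorded in the PID file, for example kill -HUP "$(cat /var/run/$name.pid)" from a shell (and similarly with -USR1).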
Related
I am starting multiple bash scripts from a Perl script, and I want to monitor them and log their behavior.
I know that I can tell whether a process is still running with kill 0, $pid, and I can get the exit code from $?, but when launching multiple scripts in the background I can't relate the values of $? to the processes that produced them.
How can I launch those scripts in parallel, but get the exit code from each of them? I need something like proc_get_status from PHP.
Sorry for not providing the code from the beginning.
I stripped the code down so that only the important parts are visible.
use warnings;
use strict;

use IPC::Open3;
use IO::Handle;

my $timeLimit = 60*60;   # some time limit not to be crossed
my $startTime = time();

my @commands  = ();      # fill up with commands to be executed
my @processes = ();

foreach my $cmd (@commands) {
    my $stdout = IO::Handle->new;
    my $stderr = IO::Handle->new;
    my $pid = open3(undef, $stdout, $stderr, $cmd);
    push @processes, { "pid" => $pid, "out" => $stdout, "err" => $stderr, "cmd" => $cmd };
}

do {
    if (time() - $startTime > $timeLimit) {
        kill 2, $_->{pid} foreach (@processes);
        @processes = ();
        last;
    } else {
        for (my $i = 0; $i < @processes; $i++) {
            unless (kill 0, $processes[$i]{pid}) {
                # if it's not running, I would like to check the exit code and log it from here on.
                # also remove it from the array; that's why I used for and not foreach, so I can use splice later.
            }
        }
    }
} while (@processes > 0);
You have already hit upon the insight of storing background job data in mini-objects. Take the next step and try a full-featured parallelization package like Forks::Super. You can create background process objects that you can then query for their status and exit code. Forks::Super supports process timeouts and an open3-like interface.
use Forks::Super;

$Forks::Super::MAX_PROC = 10;   # optional, block while 10 jobs already running
...
foreach my $cmd (@commands) {
    my $job = fork {
        cmd      => $cmd,        # run $cmd in background process
        child_fh => 'out,err',   # child STDOUT, STDERR available to parent
        timeout  => $timeLimit,  # kill the job after $timeLimit seconds
    };
    push @processes, $job;
}

while (@processes) {
    sleep 5;
    foreach my $job (@processes) {
        if ($job->is_complete) {
            $job->wait;
            my $exit_code = $job->status;
            my $output    = $job->read_stdout;
            my $error     = $job->read_stderr;
            # ... log status, output, error, $job->{cmd}, etc. ...
            $job->dispose;   # close filehandles and other clean up
        }
    }
    @processes = grep { !$_->is_reaped } @processes;
}
You can use wait and waitpid to get the status of individual children. The perlipc documentation gives a few examples in the section on "Signals".
Since you're using IPC::Open3, its synopsis also has an example of using waitpid():
my ($wtr, $rdr, $err);
use Symbol 'gensym'; $err = gensym;
my $pid = open3($wtr, $rdr, $err,
                'some cmd and args', 'optarg', ...);

waitpid($pid, 0);
my $child_exit_status = $? >> 8;
First, take a look at Perl's fork() function. This would be the typical way I do things like this. There's a good explanation with examples here.
An easy-to-use forking module is provided by Parallel::ForkManager.
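For instance, here is a minimal sketch (the commands are placeholders) that runs each command in a forked child and collects each child's exit code through Parallel::ForkManager's run_on_finish callback:

use strict;
use warnings;
use Parallel::ForkManager;

my @commands = ('exit 3', 'true', 'false');   # placeholder commands

my $pm = Parallel::ForkManager->new(10);      # at most 10 children at once

# This callback runs in the parent each time a child is reaped.
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident) = @_;
    print "'$ident' (pid $pid) exited with code $exit_code\n";
});

for my $cmd (@commands) {
    $pm->start($cmd) and next;   # parent: pass $cmd along as the job's ident
    system($cmd);                # child: run the command
    $pm->finish($? >> 8);        # child: report the command's exit code
}
$pm->wait_all_children;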
There are also Perl's interpreter-based threads, which are a bit lower-level, harder to use, and spawn threads rather than forking processes.
Another possible way is with GNU Parallel. parallel is a very powerful tool to run commands in parallel; you can easily run and manage multiple commands and scripts with it, and its --joblog option might be helpful for you.
All of these approaches provide ways to get the exit code of the sub-processes. In the end, the best choice depends on your current implementation, which you did not provide.
In Perl, system("command") will wait until the command is completed. Is there a way to make it wait only for 20 seconds? One scenario is the following:
The command is an infinite loop and won't finish, so the call will freeze and the program can't proceed. What I want is for the program not to be blocked by the command.
I know Ruby has a way to do this. Does Perl have a solution?
Thanks,
=Y
Use alarm:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };   # NB: \n required
    alarm 20;
    system("<Your command>");
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";   # propagate unexpected errors
    # timed out
}
else {
    # didn't
}
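Note that when the alarm fires, only the eval is interrupted; the command started by system may keep running in the background. If the child must be terminated too, one option (a sketch, with "<Your command>" again as a placeholder) is to fork explicitly so you have a PID to kill:

my $pid = fork();
die "Cannot fork: $!" unless defined $pid;
if ($pid == 0) {
    exec "<Your command>" or die "Cannot exec: $!";   # child
}
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm 20;
    waitpid($pid, 0);    # parent waits for the child
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";
    kill 'TERM', $pid;   # timed out: terminate the child...
    waitpid($pid, 0);    # ...and reap it
}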
#!/usr/bin/perl -w
use strict;
use 5.010;

my $foo = 123;
my $pidChild = fork();   # All objects before this fork statement will be copied.

given ($pidChild)
{
    when (!defined($_)) {
        die "Cannot fork: $!";
    }
    when ($_ == 0) {
        # The child process goes here.
        $foo = 456;                    # This is a duplicate.
        system 'subprocess options';   # Or: exec 'subprocess options';
    }
    default {
        # The original process goes here.
        waitpid($_, 0);   # Whether to wait or not is up to you.
        say $foo;         # Output: 123
    }
}
If inter-process communication (IPC) is needed, the built-in function pipe can be used before the invocation of fork to create two connected handles, one for reading and one for writing, which will be shared by the original process and the subprocess.
There's surely more than one way to do IPC. The built-in function open, the open2 subroutine offered by the module IPC::Open2, and open3 from IPC::Open3 can all run a subprocess asynchronously.
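A minimal sketch of the pipe approach (the message text is made up for illustration):

use strict;
use warnings;

pipe(my $reader, my $writer) or die "Cannot create pipe: $!";

my $pid = fork();
die "Cannot fork: $!" unless defined $pid;

if ($pid == 0) {                           # child
    close $reader;                         # the child only writes
    print $writer "result from child\n";
    close $writer;
    exit 0;
}

close $writer;                             # the parent only reads
my $line = <$reader>;
print "Parent received: $line";
close $reader;
waitpid($pid, 0);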
I wrote a quite large program that executes commands on many remote hosts, but I ran into a serious problem, and I don't know how to fix it.
After a lot of trying around, I was able to extract the minimum code to reproduce the problem reliably on my machine:
use warnings;
use strict;
use threads;
use threads::shared;
use Data::Dumper;
use POSIX ":sys_wait_h";
use Net::OpenSSH;
use Time::HiRes qw( usleep );
my @LIST = qw(host038b host039a host039b host040a host040b host041a host041b host043a
host043b host044a host044b host045a host045b host046a host046b host047a host047b host049a
host049b host050a host050b host054a host054b host055a host055b host056a host056b host057a
host057b host058a host059a host059b host060a host060b host062a host062b host063a host068a
host068b host069a host069b host071a host071b host072a host073a host073b host075a host075b
host078a host078b host082a host082b host087a host087b host089a host089b host090a host090b
host091a host091b host092a host092b host096a host096b host097a host097b host098a host099a
host099b host100a);
my ($SSH, $CPID, %PIDS, @DONE);

sub _testthread {
    # Read stdout pipe
    my $SCROUT = shift;
    while (<$SCROUT>) {
        print $_;   # I normally write that to a logfile
    }
    return (0);
}

foreach (@LIST) {
    $SSH->{$_} = Net::OpenSSH->new($_, async => 1,
        master_opts => [ -o => "PasswordAuthentication=no" ]);
}

$SIG{CHLD} = sub {
    my $WPID;
    push(@DONE, { 'PID' => $WPID, 'RC' => $?, 'ERR' => $! })
        while (($WPID = waitpid(-1, WNOHANG)) > 0);
};

foreach (@LIST) {
    my ($SCRFH, $SCROUT, undef, $CPID) = $SSH->{$_}->open_ex(
        { stdin_pipe => 1, stdout_pipe => 1 }, '/bin/bash -s');
    $PIDS{$CPID} = 'ACTIVE';
    threads->new('_testthread', $SCROUT);
    print $SCRFH "sleep 2\n";
    print $SCRFH "echo test `hostname`\n";
    print $SCRFH "exit 0\n";
    close $SCRFH;
    usleep 10000;
}

while (grep(/^ACTIVE/, values(%PIDS)) > 0) {
    print Dumper \%PIDS;
    while (@DONE) {
        my $DONE = shift(@DONE);
        $PIDS{$DONE->{PID}} = 'DONE';
    }
    sleep 1;
}

$_->join foreach (threads->list);
With the preinstalled Perl 5.10, this segfaults most of the time, even when taking out some of the more complicated constructs, such as redirecting the open_ex output to a file descriptor.
With a newly compiled Perl 5.18.2, this script hangs indefinitely most of the time, because it does not seem to receive every SIG{CHLD}, even though I am using safe signalling (as far as I understand).
To reproduce the problem, the following things seem to be necessary:
A sufficient number of hosts in @LIST
letting open_ex (or derived methods of Net::OpenSSH) fork
giving the STDOUT file handle of that fork to a thread
using a signal handler for SIG{CHLD}
As my bigger program, which uses this structure, is mostly unusable, I would be very happy if somebody could help me find a solution, or maybe an alternative.
Thanks, and greetings,
Mazze
You are trying to mix signals and threads, and that's always going to be a bad idea. The solution is simply to stop using threads; whatever you're doing can be done more cleanly in a different way.
Perhaps consider some kind of asynchronous/event-driven IO system, to perform these concurrent IO tasks with.
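For example, instead of starting one thread per pipe, the parent can watch all the stdout handles itself. A sketch using IO::Select (drain_pipes would be handed the $SCROUT handles collected from open_ex):

use strict;
use warnings;
use IO::Select;

# Read everything the given pipes produce until all of them reach EOF,
# with no threads and no per-handle blocking reads.
sub drain_pipes {
    my @handles = @_;
    my $sel = IO::Select->new(@handles);
    while ($sel->count) {
        for my $fh ($sel->can_read) {
            if (sysread($fh, my $chunk, 4096)) {
                print $chunk;   # or append to a logfile
            }
            else {              # EOF or read error
                $sel->remove($fh);
                close $fh;
            }
        }
    }
}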
I'm writing a wrapper around an external command ("sox", if that helps) with Perl Tk.
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd)
        or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024;   # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop until $cmd terminates.
Is there some way to read the output handle asynchronously?
Or should I go with the standard fork stuff?
The solution should work under Win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script showing how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) {   # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;
    }
    exit;
}

$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;
$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});

MainLoop;
Forking may or may not work under Windows. Also, one needs to be cautious when forking within Tk: you must make sure that only one of the two processes does X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.
I am trying to make an interactive shell script in Perl.
The only way of reading user input I have found is the following:
$name = <STDIN>;
print STDOUT "Hello $name\n";
But with this, the user must always press Enter before the input takes effect.
How can I get the program to proceed immediately after a single key has been pressed?
From perlfaq8's answer to "How do I read just one key without waiting for a return key?":
Controlling input buffering is a remarkably system-dependent matter. On many systems, you can just use the stty command as shown in getc in perlfunc, but as you see, that's already getting you into portability snags.
open(TTY, "+</dev/tty") or die "no tty: $!";
system "stty cbreak </dev/tty >/dev/tty 2>&1";
$key = getc(TTY); # perhaps this works
# OR ELSE
sysread(TTY, $key, 1); # probably this does
system "stty -cbreak </dev/tty >/dev/tty 2>&1";
The Term::ReadKey module from CPAN offers an easy-to-use interface that should be more efficient than shelling out to stty for each key. It even includes limited support for Windows.
use Term::ReadKey;
ReadMode('cbreak');
$key = ReadKey(0);
ReadMode('normal');
However, using the code requires that you have a working C compiler and can use it to build and install a CPAN module. Here's a solution using the standard POSIX module, which is already on your system (assuming your system supports POSIX).
use HotKey;
$key = readkey();
And here's the HotKey module, which hides the somewhat mystifying calls to manipulate the POSIX termios structures.
# HotKey.pm
package HotKey;

@ISA    = qw(Exporter);
@EXPORT = qw(cbreak cooked readkey);

use strict;
use POSIX qw(:termios_h);

my ($term, $oterm, $echo, $noecho, $fd_stdin);

$fd_stdin = fileno(STDIN);
$term     = POSIX::Termios->new();
$term->getattr($fd_stdin);
$oterm    = $term->getlflag();

$echo   = ECHO | ECHOK | ICANON;
$noecho = $oterm & ~$echo;

sub cbreak {
    $term->setlflag($noecho);   # ok, so i don't want echo either
    $term->setcc(VTIME, 1);
    $term->setattr($fd_stdin, TCSANOW);
}

sub cooked {
    $term->setlflag($oterm);
    $term->setcc(VTIME, 0);
    $term->setattr($fd_stdin, TCSANOW);
}

sub readkey {
    my $key = '';
    cbreak();
    sysread(STDIN, $key, 1);
    cooked();
    return $key;
}

END { cooked() }

1;
You can use the Term::ReadKey module to check for a keypress.
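For example, a small sketch that polls for a keypress without blocking the rest of the program:

use strict;
use warnings;
use Term::ReadKey;

ReadMode('cbreak');
END { ReadMode('normal') }   # always restore the terminal

while (1) {
    my $key = ReadKey(-1);   # non-blocking: returns undef if no key is waiting
    if (defined $key) {
        print "You pressed: $key\n";
        last if $key eq 'q';
    }
    # ... do other work here between polls ...
    select(undef, undef, undef, 0.1);   # sleep 100 ms
}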