Hi there. I'm trying to do the following:
Fork.
Launch a desktop application.
Wait for 2 secs.
Kill the desktop application.
This is what I have:
#!/usr/bin/perl
use utf8;
use strict;
use warnings;
my @cmd = ('calc.exe');
my $pid = fork();
die "fork() failed: $!" unless defined $pid;
unless ($pid) {
    system @cmd;
}
sleep 2;
kill 'TERM' => $pid;
The application is launched correctly, but it isn't killed after the two seconds. I know I'm missing something; I hope someone can point me in the right direction. Right now I'm testing this code on Windows 7 SP1 with perl 5.32.1 bundled with msys2.
You have up to four processes here: parent, child, shell, and app. You're killing the child, not the app. Use exec instead of system, and use the exec BLOCK LIST form (exec { $cmd[0] } @cmd) to avoid a shell.
Even then, that may not work. It depends on how well msys2 emulates fork and exec. A better solution might be to use the Windows system calls (CreateProcess and ExitProcess) instead of emulations of unix system calls.
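A minimal sketch of the suggested fix, assuming the fork/exec emulation cooperates (whether TERM actually closes a GUI app under msys2 is a separate question):

#!/usr/bin/perl
use strict;
use warnings;

my @cmd = ('calc.exe');

my $pid = fork();
die "fork() failed: $!" unless defined $pid;

unless ($pid) {
    # Child: replace this process with the app directly, no shell.
    exec { $cmd[0] } @cmd;
    die "exec() failed: $!";
}

# Parent: give the app two seconds, then signal it and reap it.
sleep 2;
kill 'TERM' => $pid;
waitpid $pid, 0;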
Related
I have to kill a program that I am opening via
$pid = open(FH, "program|")
or
$pid = open(FH, "-|", "program")
However, the program (mosquitto_sub, to be specific) still lingers around in the background, because open returns the PID of the sh that perl uses to run the program, so I am only killing the sh wrapper instead of the actual program.
Is there a way to get the program's real PID? What is the point of getting the sh's PID?
There are a few ways to deal with this.
First, you can use the list form of open; then no shell is involved, so the child process (whose pid is returned by open) is precisely the one running the program you need to stop:
my @cmd = ('progname', '-arg1', ...);
my $pid = open(my $fh, '-|', @cmd) or die "Can't open \"@cmd\": $!";
...
my $num_signaled = kill 15, $pid;
This sketch needs some checks added. Please see the linked documentation (look for "pipe").
If this isn't suitable for some reason -- perhaps you need the shell to run that program -- then you can find the program's pid, and the Proc::ProcessTable module is good for this. A basic demo:
use Proc::ProcessTable;

my $prog_name = ...;
my $pid;

my $pt = Proc::ProcessTable->new();
foreach my $proc (@{$pt->table}) {
    if ($proc->cmndline =~ /\Q$prog_name/) {  # is this enough to identify it?
        $pid = $proc->pid;
        last;
    }
}
my $num_signaled = kill 15, $pid;
Please be careful when identifying the program by its name -- on a modern system there may be all kinds of running processes that contain the name of the program you want to terminate. For more detail and discussion please see this post and this post, for starters.
Finally, you can use a module to run your external programs; then you'll be able to manage and control them far more nicely. Here I'd recommend IPC::Run.
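For instance, a minimal sketch using IPC::Run's start() (the command and topic are placeholders): the harness it returns lets you signal and reap the real program directly, with no shell in between.

use IPC::Run qw(start);

# Placeholder command; start() runs it without a shell.
my $h = start [ 'mosquitto_sub', '-t', 'some/topic' ], '>', \my $out;
sleep 2;
$h->signal('TERM');  # signals the program itself, not an sh wrapper
$h->finish;          # reap the process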
I am working on a capstone project and am hoping for some insight.
This is the first time I've worked with Perl, and it's pretty much a basic Perl script to automate a few different Unix commands that need to be executed in a specific order. There are two lines in the script which execute a Unix command that needs to finish processing before it is acceptable for the rest of the script to run (the data will be incorrect otherwise).
How am I able to use Perl (or maybe this is a Unix question?) to print a simple string once the Unix command has finished processing? I am looking into ways to read in the Unix command name, but am not sure how to check whether the process is still running and then print a string such as "X command has finished processing" upon its completion.
Example:
system("nohup scripts_pl/RunAll.pl &");
This runs a command in the background that takes time to process. I am asking how I can use Perl (or Unix?) to print a string once the process has finished.
I'm sorry if I've misunderstood the context of your question.
But couldn't you use Perl's fork function instead of & if you want to run the processes in parallel?
use feature 'say';

# parent process
if (my $pid = fork) {
    # this block continues in the foreground
    system("nohup scripts_pl/RunAll2.pl");  # you can run other commands here (like RunAll2.pl)
    wait;                                   # wait for the background child to finish
    say 'finished both';
}
# child process
else {
    # this block behaves as a background process
    system("nohup scripts_pl/RunAll.pl");   # note: no trailing &
}
You could try to use IPC::Open3 instead of system:
use IPC::Open3;
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
waitpid( $pid, 0 );
Or, if you need to run nohup through the shell:
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'bash','-c', 'nohup scripts_pl/RunAll.pl & wait');
Update: Thanks to @ikegami. A better approach if you would like STDIN to stay open after running the command:
open(local *CHILD_STDIN, "<&", '/dev/null') or die $!;
my $pid = open3("<&CHILD_STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
I'm writing a Perl program for Windows that runs several SVN commands.
I need to receive the status of the SVN process, so I'm using backticks.
e.g.:
{
    my $COMMAND = "blabla...";
    my $results = `$COMMAND 2>&1`;
    parse_results($results);
}
Sometimes the process gets stuck, so I need to set a timeout on the process.
I tried to use the "ALRM" signal, but it didn't kill the stuck process. I receive the indication only if and when the process finishes.
What can I do to deal with processes that don't complete fast enough?
Signals are a unix concept. Instead, you should use IPC::Run.
use IPC::Run qw( run timeout );
run [ 'bla', 'arg' ], '>&', \my $results, timeout( 10 )
    or die "bla: $?";
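One thing to know: when the timer expires, timeout() throws an exception rather than just making run return false, so you can wrap the call in eval to tell a hang apart from an ordinary failure. A rough sketch with a placeholder svn command:

use IPC::Run qw( run timeout );

my $results;
eval {
    # placeholder command; replace with your actual svn invocation
    run [ 'svn', 'status' ], '>&', \$results, timeout( 10 )
        or die "svn exited nonzero: $?\n";
    1;
} or do {
    if ( $@ =~ /timeout/i ) {  # assumes the timer's exception text mentions "timeout"
        warn "svn timed out after 10s\n";
    }
    else {
        die $@;
    }
};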
I'm trying to grasp the concept of fork() & exec() for my own learning purposes. I'm trying to use Perl's fork to create a second, identical process, and then use that to exec a .sh script.
If I use fork() & exec(), can I get the .sh script to run in parallel with my Perl script? The Perl script doesn't wait on the child process and continues its execution. My Perl script doesn't care about the output of the child process, only that the command is valid and running. Sort of like calling the script to run in the background.
Is there some sort of safety I can implement to know that the child process exited correctly as well?
If I use fork() & exec(), can I get the .sh script to run in parallel with my Perl script? [...] Sort of like calling the script to run in the background.
Yes. Fork & exec is actually the way shells run commands in the background.
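A minimal sketch of that pattern (the script name is a placeholder):

my $pid = fork();
die "fork() failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: replace this process with the shell script.
    exec '/bin/sh', 'myscript.sh' or die "exec() failed: $!";
}

# Parent: falls through immediately and keeps running;
# the child can be reaped later with waitpid (see below).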
Is there some sort of safety I can implement to know that the child process exited correctly as well?
Yes, using waitpid() and looking at the return value stored in $?.
Like @rohanpm mentioned, the perlipc man page has a lot of useful examples showing how to do this. Here is one of the most relevant, where a signal handler is set up for SIGCHLD (which will be sent to the parent when the child terminates):
use POSIX ":sys_wait_h";

my %Kid_Status;  # exit statuses, keyed by child pid
$SIG{CHLD} = sub {
    while ((my $child = waitpid(-1, WNOHANG)) > 0) {
        $Kid_Status{$child} = $?;
    }
};
To get waitpid to not wait for the child:
use POSIX qw/ WNOHANG /;
my $st = waitpid $pid, WNOHANG;
$st is 0 if the process is still running, and the pid once it has been reaped.
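So, to print the asker's message once the background command finishes, one option is a simple polling loop (a sketch; $pid is the child's pid from fork):

use POSIX qw/ WNOHANG /;

while (waitpid($pid, WNOHANG) == 0) {
    # Child still running: do other work, or just wait a bit.
    sleep 1;
}
print "RunAll.pl has finished processing (exit status $?)\n";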
I need to run a Perl script periodically via cron (every 3-5 minutes). I want to ensure that only one instance of the script is running at a time, so the next cycle won't start until the previous one has finished. Could/should that be achieved by some built-in functionality of cron or Perl, or do I need to handle it at the script level?
I am quite new to Perl and cron, so help and general recommendations are appreciated.
I have always had good luck using File::NFSLock to get an exclusive lock on the script itself.
use Fcntl qw(LOCK_EX LOCK_NB);
use File::NFSLock;
# Try to get an exclusive lock on myself.
my $lock = File::NFSLock->new($0, LOCK_EX|LOCK_NB);
die "$0 is already running!\n" unless $lock;
This is sort of the same as the other lock file suggestions, except I don't have to do anything except attempt to get the lock.
The Sys::RunAlone module does what you want very nicely. Just add
use Sys::RunAlone;
near the top of your code. (The script must end with an __END__ or __DATA__ section, which the module locks to detect other running instances.)
Use File::Pid to store the script's pid in a file, which the script should check for at the start, and abort if found. You can remove the pidfile when the script is done, but it's not truly necessary, as you can simply check later to see if that process id is still alive (which will also account for the cases when your script aborts unexpectedly):
use strict;
use warnings;
use File::Pid;
my $pidfile = File::Pid->new({ file => '/var/run/myscript' });
exit if $pidfile->running();
$pidfile->write();
# ... rest of script...
# end of script
$pidfile->remove();
exit;
A typical approach is for each process to open and lock a certain file. Then the process reads the process ID contained in the file.
If a process with that ID is running, the latecomer exits quietly. Otherwise, the new winner writes its process ID ($$ in Perl) to the pidfile, closes the handle (which releases the lock), and goes about its business.
Example implementation below:
#! /usr/bin/perl

use warnings;
use strict;

use Fcntl qw/ :DEFAULT :flock :seek /;

my $PIDFILE = "/tmp/my-program.pid";

sub take_lock {
    sysopen my $fh, $PIDFILE, O_RDWR | O_CREAT or die "$0: open $PIDFILE: $!";
    flock $fh => LOCK_EX or die "$0: flock $PIDFILE: $!";

    my $pid = <$fh>;
    if (defined $pid) {
        chomp $pid;
        if (kill 0 => $pid) {
            close $fh;
            exit 1;
        }
    }
    else {
        die "$0: readline $PIDFILE: $!" if $!;
    }

    sysseek $fh, 0, SEEK_SET or die "$0: sysseek $PIDFILE: $!";
    truncate $fh, 0 or die "$0: truncate $PIDFILE: $!";
    print $fh "$$\n" or die "$0: print $PIDFILE: $!";
    close $fh or die "$0: close: $!";
}
take_lock;
print "$0: [$$] running...\n";
sleep 2;
I have always used this - small and simple - no dependency on any module, and it works on both Windows and Linux.
use Fcntl ':flock';

### Check to make sure there is only one instance ###
open SELF, '<', $0 or die "Cannot open $0: $!";
unless ( flock SELF, LOCK_EX | LOCK_NB ) {
    print "You cannot run two instances of this program, a process is still running\n";
    exit 1;
}
AFAIK Perl has no such thing built in. You could easily create a temporary file when you start your application and delete it when your script is done.
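A naive sketch of that idea (the flag path is a placeholder). Note there is a race between the -e check and the create, and a crash leaves the file behind, which is why the lock-based answers above are more robust:

my $flag = '/tmp/myscript.running';  # placeholder path

exit if -e $flag;  # another instance (or a stale flag file) exists
open my $fh, '>', $flag or die "Can't create $flag: $!";
close $fh;
END { unlink $flag }  # removed on normal exit only

# ... rest of script ...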
Given the frequency, I would normally write a daemon (server) that waits idly between job runs (i.e. sleep()) rather than try to use cron for fairly fine-grained scheduling.
If necessary, on Unix / Linux systems you could run it from /etc/inittab (or a replacement) to ensure that it is always running and is automatically restarted if the process is killed or dies.
Added:
The always-present (running, but mostly idle) daemon approach has the benefit of eliminating the possibility of concurrent instances of the script being started by cron automatically.
However, it does mean you are responsible for managing the timing correctly, such as when there is an overlap (i.e. a previous run is still running when a new trigger occurs). This may help you decide whether to use a forking or a non-forking design. Threads don't provide any advantage in this scenario, so there is no need to consider them.
This does not completely eliminate the possibility of multiple processes running, but that is a common problem with many daemons. The typical solution is to use a semaphore such as a mutually-exclusive lock on a file to prevent a second instance from running. The file lock is automatically released when the process ends, so in the case of abnormal termination (e.g. power failure) no clean-up of the lock itself is necessary.
An approach using the Fcntl module and Perl's sysopen with an O_EXCL flag (or O_RDWR | O_CREAT | O_EXCL) was given by Greg Bacon. The only differences I would make are to combine the exclusive locking into the sysopen call (i.e. use the flags I've suggested) and to remove the then-redundant flock call. Oh, and I would follow the UNIX (and Linux FHS) file-system naming conventions and use /var/run/daemonname.pid.
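For illustration, a minimal sketch of that variant (the pidfile name is a placeholder). With O_EXCL the open itself fails if the file already exists, so no flock is needed; the trade-off is that a stale pidfile left by a crash will block restarts until it is removed:

use Fcntl qw(O_RDWR O_CREAT O_EXCL);

my $PIDFILE = '/var/run/daemonname.pid';  # placeholder name

# Creation fails if the pidfile already exists, so a second
# instance exits immediately.
sysopen my $fh, $PIDFILE, O_RDWR | O_CREAT | O_EXCL
    or die "$0: another instance appears to be running: $!";
print $fh "$$\n";
close $fh;

END { unlink $PIDFILE }  # clean up on normal exit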
Another approach would be to use djb's daemontools or similar to "daemonize" the task.