How do I manage multiple subprocesses in Perl?

I have a Perl program that needs to run about half a dozen programs at the same time in the background and wait for them all to finish before continuing. It is also very important that the exit status of each can be captured.
Is there a common idiom for doing this in Perl? I'm currently thinking of using threads.

Don't use threads. Threads suck. The proper way is to fork multiple processes and wait for them to finish. If you use wait or waitpid, the exit status of the process in question will be available in $?.
See the perldocs for fork, wait, and waitpid, and also the examples in this SO thread.
If all you need is to just manage a pool of subprocesses that doesn't exceed a certain size, check out the excellent Parallel::ForkManager.
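A minimal sketch of that fork/waitpid idiom, assuming placeholder program names (prog1 through prog3): fork one child per command, remember the PIDs, then reap each child and pull its exit code out of $?.
use strict;
use warnings;

my @commands = ('prog1', 'prog2', 'prog3');   # placeholder program names
my (%status, @pids);

for my $cmd (@commands) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        exec $cmd or die "exec $cmd failed: $!";   # child becomes the program
    }
    push @pids, $pid;   # parent remembers each child's PID
}

for my $pid (@pids) {
    waitpid($pid, 0);            # block until this particular child exits
    $status{$pid} = $? >> 8;     # high byte of $? holds the exit code
}
Parallel::ForkManager wraps essentially this pattern and adds the pool-size limit on top.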

Normally you would fork + exec; on Unix-based systems, this is the traditional approach.
The fork call duplicates the current process, and if you need to you can then call exec in one of the children to run something different. It sounds like you just want to fork and call a different method/function in your child process, as in the sketch below.
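For example, a minimal sketch along those lines (do_work here is just a stand-in for whatever the child should actually do):
use strict;
use warnings;

sub do_work { print "child $$ doing its thing\n" }   # stand-in for real work

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    do_work();     # runs only in the child
    exit 0;        # exit explicitly so the child doesn't fall into parent code
}
waitpid($pid, 0);  # parent waits; the child's exit status lands in $?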
If you want something more complex, check cpan for POE - that lets you manage all sorts of complex scenarios.
Useful links:
"spawning multiple child processes" on PerlMonks
Google "perl cookbook forking server" too - only allowed to post one link unless I log in.

Related

Can I control process priorities through Perl?

I am wondering if it is possible to control process priorities through Perl.
Basically, I want my Perl script to keep running on my box even if some other process takes up the CPU. The Perl script should either reduce that process's priority or, if the process is consuming too much CPU, kill it outright.
I hate to be operating-system specific, but I am trying to design this for a Windows system.
You can use getpriority and setpriority to handle priorities in Perl.
From POSIX::nice():
This is similar to the C function nice(), for changing the scheduling preference of the current process. Positive arguments mean more polite process, negative values more needy process. Normal user processes can only be more polite. Returns undef on failure.
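A rough sketch on a Unix-like system, using Perl's built-in getpriority/setpriority (the PID is a placeholder; note that the question asks about Windows, where these calls may not be implemented and a Win32-specific module would be needed instead):
use strict;
use warnings;

my $pid = 12345;   # placeholder: PID of the CPU-hungry process

# Read the current priority (0 == PRIO_PROCESS on typical systems).
my $current = getpriority(0, $pid);
print "current priority: $current\n";

# Lower the process's priority (a higher nice value means more polite).
setpriority(0, $pid, 10) or warn "setpriority failed: $!";

# If it's hogging the CPU beyond saving, kill it instead:
# kill 'TERM', $pid;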

Perl, waiting for non-child process to exit

I have a script which is used to redeploy a couple of programs in a custom server environment (i.e., not an established standard container with code hot-swapping). To do this, it takes down the server processes, but these take some time to fully close all their connections. They aren't child processes of the Perl script. They normally run for hundreds of days at a time, so I'd rather not wrap the server processes in Perl scripts just so I can fork them in order to shut them down elegantly months or years later.
So currently, to wait on them to die during redeployment, I'm parsing the output of ps -ef, grabbing the pid field, killing that pid, waiting 60 seconds (which seems a reasonable time for these processes), then re-checking ps -ef to make sure they're dead, and so on. After that I go on with the copies, chmods, etc.
This solution feels lame/clunky to me. I've googled all over and have not seen anything on this particular topic; there's a pile of material about waiting on forked children, and waitpid would be perfect if only it worked this way.
From reading How to wait for exit of non-children processes (which is C-specific), I'm guessing there's really not much else I can do, apart from reading /proc/pid instead, but I thought maybe there'd be a Perl-specific solution out there somewhere. Any ideas?
You can use kill 0, $pid (which returns 1 on success and 0 on failure) instead of re-checking ps -ef, but that has the possible gotcha that the PID may have been reused; see the sketch below.
If you already have ps-parsing code, it's probably not worth it to switch, but there's Proc::ProcessTable.
Other than that, no ideas.
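For what it's worth, a sketch of that kill-0 polling approach, using a placeholder PID and the 60-second grace period from the question (note that kill 0 can also fail with EPERM when the process exists but belongs to another user):
use strict;
use warnings;

my $pid = 4242;               # placeholder: PID parsed out of ps -ef
kill 'TERM', $pid;            # ask the server process to shut down

my $deadline = time() + 60;   # the 60-second grace period
while (kill(0, $pid)) {       # signal 0: existence check, sends nothing
    last if time() >= $deadline;
    sleep 1;
}

warn "process $pid is still alive after 60s\n" if kill 0, $pid;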
In Unix/Linux, only the parent process gets a signal (SIGCHLD) when one of its child processes exits. This is an OS feature, not anything language-specific.
Other solutions will be equivalent to yours: checking the process table for the existence of the process (although the specific method may vary, such as using ps or querying the kernel directly).

How can I run my program code after fixed intervals?

I have this Perl script for monitoring a folder in Linux.
To continuously check for any updates to the directory, I have a while loop that sleeps for 5 minutes between successive iterations:
while (1) {
    ...
    sleep 300;
}
Somebody on my other question suggested using cron for scheduling instead of a while loop.
This while construct, without any break, looks ugly to me compared to submitting a cron job using crontab:
*/5 * * * * ./myscript > /dev/null 2>&1
Is cron the right choice? Are there any advantages of using the while loop construct?
Are there any better ways of doing this besides the loop and cron?
Also, I'm using a 2.6.9 kernel build.
The only reasons I have ever used the while solution are that I needed my code to run more than once a minute, or that it needed to respond immediately to an external event; neither appears to be the case here.
My thinking is usually along the lines of: cron has been tested by millions and millions of people over decades so it's at least as reliable as the code I've just strung together.
Even in situations where I've used while, I've still had a cron job to restart my script in case of failure.
My advice would be to simply use cron. That's what it's designed for. And, as an aside, I rarely redirect the output to /dev/null; that makes it too hard to debug. Usually I simply redirect to a file in the /tmp file system so that I can see what's going on.
You can append as long as you have an automated clean-up procedure and you can even write to a more private location if you're worried about anyone seeing stuff in the output.
The bottom line, though, is that a rare failure can't be analysed if you're throwing away the output. If you consider your job to be bug-free then, by all means, throw the output away but I rarely consider my scripts bug-free, just in case.
Why don't you make the build process that puts the build into the directory do the notification? (See SO 3691739 for where that comes from!)
Having cron run the program is perfectly acceptable - and simpler than a permanent loop with a sleep, though not by much.
Against a cron solution, since the process is a simple one-shot, you can't tell what has changed since the last time it was run - there is no state. (Or, more accurately, if you provide state - via a file, probably - you are making life much more complex than running a single script that keeps its state internally.)
Also, stopping the notification service is less obvious. If there's a single process hanging around, you kill it and the notifications stop. If the notifications are run by cron, then you have to know that they're run out of a crontab, know whose crontab it is, and edit that entry in order to stop it.
You should also consider persuading your company to upgrade to a version of Linux where the inotify mechanism is available, as in the sketch below.
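If that upgrade happens (inotify landed in kernel 2.6.13), a sketch using the Linux::Inotify2 module and a placeholder path would replace the polling loop entirely: the script blocks until something actually changes instead of waking up every five minutes.
use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# Watch a placeholder directory for files being created, changed, or removed.
$inotify->watch('/path/to/dir', IN_CREATE | IN_MODIFY | IN_DELETE, sub {
    my $event = shift;
    print $event->fullname, " changed\n";
});

1 while $inotify->poll;   # poll() blocks and dispatches the callbacks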
If you go for the loop instead of cron and want your job run at regular intervals, note that sleep(300) tends to drift (consider the execution time of the rest of your script).
I suggest using a construct like this:
use constant DELAY => 300;

my $next = time();
while (1) {
    $next += DELAY;
    ...;                                  # the actual work goes here
    my $remaining = $next - time();
    sleep($remaining) if $remaining > 0;  # guard against overrunning DELAY
}
Yet another alternative is the anacron utility.
If you don't want to use cron, Upstart (http://upstart.ubuntu.com/) can be used to babysit processes, or you can use watch, whichever is easier.

How do I get the PID of the process I start with Perl's system()?

I'm writing a Perl script that runs 4 simultaneous, identical processes with different input parameters (see background here - the rest of my question will make much more sense after reading that).
I am making a system() call to a program that generates data (XFOIL, again see above link). My single-core version of this program looks like this:
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT" };
    alarm 250;
    system("xfoil <command_list >xfoil_output");
    alarm 0;
};
if ($@) {
    # read the output log and run timeout stuff...
    system('killall xfoil');   # kill the hung XFOIL; now it's a zombie
}
Essentially, XFOIL should take only about 100 seconds to run - so after 250 seconds the program is hanging (presumably waiting for user input that it's never going to get).
The problem now is, if I do a killall in the multi-core version of my program, I'm going to kill 3 other instances of XFOIL, and those processes are generating data. So I need to kill only the hung instance, and this requires getting a PID.
I don't know very much about forks and such. From what I can tell so far, I would run exec('xfoil') inside the child process that I fork. But is the PID after the exec() different from the PID of the child process? (It's a separate program, so I'd assume it is, but again I've no experience with this.) If so, this still doesn't help when I want to forcefully kill the process, since I won't have the PID anyway. How do I go about doing this?
Thanks a ton for your help!
If you want the PID, fork the process yourself instead of using system. The system command is mostly designed as a "fire and forget" tool; if you want to interact with the process, use something else. See, for instance, the perlipc documentation, and the sketch below.
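A sketch of that approach, reusing the filenames from the question: fork, exec xfoil in the child (exec replaces the program but keeps the child's PID, which answers the question above), and on timeout kill only that PID rather than every running xfoil.
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: set up I/O redirection, then replace ourselves with xfoil.
    open STDIN,  '<', 'command_list' or die "can't open command_list: $!";
    open STDOUT, '>', 'xfoil_output' or die "can't open xfoil_output: $!";
    exec 'xfoil' or die "exec failed: $!";
}

# Parent: wait up to 250 seconds for this specific child.
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" };
    alarm 250;
    waitpid($pid, 0);
    alarm 0;
};
if ($@) {
    kill 'KILL', $pid;   # kill only the hung instance, not its siblings
    waitpid($pid, 0);    # reap it so it doesn't linger as a zombie
}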
I think you've already looked at Parallel::ForkManager based on answers to your question How can I make my Perl script use multiple cores for child processes?

In Perl, how can I do a non-blocking system call?

In Perl, without using the Thread library, what is the simplest way to spawn off a system call so that it is non-blocking? Can you do this while avoiding fork() as well?
EDIT
Clarification. I want to avoid an explicit and messy call to fork.
Do you mean like this?
system('my_command_which_will_not_block &');
As Chris Kloberdanz points out, this will call fork() implicitly -- there's really no other way for Perl to do it, especially if you want the Perl interpreter to continue running while the command executes.
The & character in the command is a shell metacharacter -- Perl sees it and passes the argument of system() to a shell (/bin/sh -c) for execution, rather than running it directly with an execv() call. & tells the shell to fork again, run the command in the background, and exit immediately, returning control to Perl while the command continues to execute.
The post above says "there's no other way for perl to do it", which is not true.
Since you mentioned file deletion, take a look at IO::AIO. This performs the system calls in another thread (a POSIX thread, not a Perl pseudo-thread); you schedule the request with aio_rmtree, and when it's done, the module calls a function in your program. In the meantime, your program can do anything else it wants, as in the sketch below.
Doing things in another POSIX thread is actually a generally useful technique. (A special hacked version of) Coro uses it to preempt coroutines (time slicing), and EV::Loop::Async uses it to deliver event notifications even when Perl is doing something other than waiting for events.
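A sketch of that IO::AIO approach, with a placeholder path (IO::AIO::flush is just the simplest way to wait for completion; normally you'd integrate poll_fileno with an event loop instead):
use strict;
use warnings;
use IO::AIO;

# Schedule an asynchronous recursive delete of a placeholder directory;
# the callback fires once the deletion has finished.
aio_rmtree '/tmp/some_dir', sub {
    my ($status) = @_;
    print "rmtree finished with status $status\n";
};

# ... the program is free to do other work here while the delete runs ...

IO::AIO::flush;   # block until every outstanding request has completed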