In Perl, without using the Thread library, what is the simplest way to spawn off a system call so that it is non-blocking? Can you do this while avoiding fork() as well?
EDIT
Clarification. I want to avoid an explicit and messy call to fork.
Do you mean like this?
system('my_command_which_will_not_block &');
As Chris Kloberdanz points out, this will call fork() implicitly -- there's really no other way for Perl to do it, especially if you want the Perl interpreter to continue running while the command executes.
The & character in the command is a shell metacharacter -- Perl sees it and passes the argument of system() to the shell (usually bash) for execution, rather than running it directly with an execv() call. The & tells bash to fork again, run the command in the background, and exit immediately, returning control to Perl while the command continues to execute.
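A minimal sketch of that behavior: because of the trailing &, the shell forks and returns immediately, so system() comes back long before the command itself finishes. Here "sleep 2" stands in for the real long-running command.

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

my $start = time;
# The trailing & makes the shell background the command,
# so system() returns almost immediately.
system('sleep 2 &');
my $elapsed = time - $start;
printf "system() returned after %.2f seconds\n", $elapsed;
```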
The post above says "there's no other way for perl to do it", which is not true.
Since you mentioned file deletion, take a look at IO::AIO. This performs the system calls in another thread (a POSIX thread, not a Perl pseudothread); you schedule the request with aio_rmtree, and when it's done the module will call a function in your program. In the meantime, your program can do anything else it wants to.
Doing things in another POSIX thread is actually a generally useful technique. (A special hacked version of) Coro uses it to preempt coroutines (time slicing), and EV::Loop::Async uses it to deliver event notifications even when Perl is doing something other than waiting for events.
Related
How can I have one perl script call another perl script and get the return results?
I have perl Script B, which does a lot of database work, prints out nothing, and simply exits with a 0 or a 3.
So I would like perl Script A call Script B and get its results. But when I call:
my $result = system("perl importOrig.pl filename=$filename");
or
my $result = system("/usr/bin/perl /var/www/cgi-bin/importOrig.pl filename=$filename");
I get back a -1, and Script B is never called.
I have debugged Script B, and when called manually there are no glitches.
So obviously I am making an error in my call above, and not sure what it is.
There are many things to consider.
Zeroth, there are the perlipc docs on interprocess communication. What's the value of the error variable $!?
First, use $^X, which is the path to the perl you are executing. Since subprocesses inherit your environment, you want to use the same perl so it doesn't confuse itself with PERL5LIB and so on.
system("$^X /var/www/cgi-bin/importOrig.pl filename=$filename")
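A sketch of both points: launch the second script with the same perl via $^X, then distinguish "couldn't start at all" (system returns -1, with the reason in $!) from the child's own exit code (the high byte of $?). Here '-e "exit 3"' stands in for importOrig.pl, which exits with 0 or 3.

```perl
use strict;
use warnings;

# Run a child with the same perl interpreter that is running us.
my $rc = system($^X, '-e', 'exit 3');
if ($rc == -1) {
    die "failed to run script: $!";   # e.g. bad path or permissions
}
my $exit_code = $? >> 8;              # Script B's 0 or 3 lives here
print "child exited with $exit_code\n";
```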
Second, CGI programs tend to expect particular environment variables to be set, such as REQUEST_METHOD. Calling them as normal command-line programs often leaves out those things. Try running the program from the command line to see how it complains. Check that it gets the environment it wants. You might also check the permissions of the program to see if you (or whatever user runs the calling program) are allowed to read it (or its directory, etc). You say there are no glitches, so maybe that's not your particular problem. But, do the two environments match in all the ways they should?
Third, consider making the second program a modulino. You could run it normally as a script from the command line, but you could also load it as a Perl library and use its features directly. This obviates all the IPC stuff. You could even fork so that stuff runs concurrently.
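A minimal modulino sketch (the package name is hypothetical; this would live in its own file, e.g. ImportOrig.pm). Run directly, it behaves as a script; loaded with require, it's an ordinary library whose run() you can call without any IPC.

```perl
use strict;
use warnings;

package ImportOrig;

sub run {
    my ($class, @args) = @_;
    # ... the database work would go here ...
    return 0;    # same 0-or-3 convention as the original script
}

# The modulino trick: when executed directly there is no caller,
# so run as a script; a real modulino would pass the result to exit().
__PACKAGE__->run(@ARGV) unless caller();

1;
```

From another program you would then `require` the file and call `ImportOrig->run(...)` directly, reading the status from the return value instead of from $?.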
The task is to close the stdout handle a while before the process exits. With WinAPI functions, it'd be this:
CloseHandle(GetStdHandle(STD_OUTPUT_HANDLE))
I know I can do DllImport with Add-Type but I believe there must be a nicer way.
What's the simplest way to accomplish the same with PowerShell?
The wider task is to test a piece of a Python library that starts and interacts flexibly with local (with the help of the subprocess and _winapi modules) or remote (via WinRM) processes on Windows. One of the tests runs a program that closes its ends of the stdout and stderr pipes a while before it exits. (There was a similar issue on Linux.) Therefore, a script must close stdout and stderr so that the calling code is signalled by the OS that they're closed. The only way I found is to call CloseHandle on the stdout and stderr handles. Calling .Close or .Dispose on the stream objects doesn't help: they seem to be closed only internally to the called process. The script should be in some "native" language that needs no additional compilers or interpreters, so it's either cmd, VBScript or PowerShell -- and only the last one is able to call WinAPI functions. (At the time of this update I had already written scripts both in Python, which works perfectly but needs an interpreter to be installed, and in PowerShell, which works without any additional installations but is a bit cumbersome and very slow.)
I have a perl script which calls another perl script using backticks. I want to instead call this script and have it daemonize. How do I go about doing this?
edit:
I don't care to communicate back with the process/daemon. I'll most likely just stick it in an sqlite3 table or something.
You refer to backticks, so I suppose that you want to communicate with the daemon after it's started? Since daemons do not use STDOUT, you will have to think of some other way of passing information to and from it.
The Perl interprocess communication man page (perlipc) has several good examples of this, especially the section "Complete dissociation of child from parent".
The Proc::Daemon module provides convenient functions for daemonizing a process.
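For reference, a minimal daemonize sketch following perlipc's "Complete dissociation of child from parent": detach from the terminal, redirect the standard handles, and start a new session. It returns only in the detached child; Proc::Daemon wraps essentially these steps in a single Init call.

```perl
use strict;
use warnings;
use POSIX qw(setsid);

sub daemonize {
    chdir '/'                     or die "chdir /: $!";
    open STDIN,  '<', '/dev/null' or die "reopen STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "reopen STDOUT: $!";
    defined(my $pid = fork)       or die "fork: $!";
    exit 0 if $pid;               # parent leaves; child carries on
    setsid() != -1                or die "setsid: $!";
    open STDERR, '>&', \*STDOUT   or die "reopen STDERR: $!";
    return;
}
```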
I have a Perl program that needs to run about half a dozen programs at the same time in the background and wait for them all to finish before continuing. It is also very important that the exit status of each can be captured.
Is there a common idiom for doing this in Perl? I'm currently thinking of using threads.
Don't use threads. Threads suck. The proper way is to fork multiple processes and wait for them to finish. If you use wait or waitpid, the exit status of the process in question will be available in $?.
See the perldocs for fork, wait, and waitpid, and also the examples in this SO thread.
If all you need is to just manage a pool of subprocesses that doesn't exceed a certain size, check out the excellent Parallel::ForkManager.
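A sketch of the fork-and-wait idiom: start several children in the background, wait for them all, and capture each exit status from $?. Each child here just exits with its own number to stand in for real work.

```perl
use strict;
use warnings;

my %exit_of;
my @pids;
for my $n (1 .. 3) {
    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) {
        # child: do the real work here, then exit with a status
        exit $n;
    }
    push @pids, $pid;
}
for my $pid (@pids) {
    waitpid $pid, 0;             # blocks until this child finishes
    $exit_of{$pid} = $? >> 8;    # high byte of $? is the exit code
}
print "statuses: @{[ sort values %exit_of ]}\n";
```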
Normally you would fork + exec (on Unix-based systems, this is the traditional approach).
The fork call will duplicate the current process, and if you needed to you could then call exec in one of the children to do something different. It sounds like you just want to fork and call a different method/function in your child process.
If you want something more complex, check CPAN for POE - that lets you manage all sorts of complex scenarios.
Useful links:
"spawning multiple child processes" on PerlMonks
Google "perl cookbook forking server" too - only allowed to post one link unless I log in.
I'm writing a Perl script that runs 4 simultaneous, identical processes with different input parameters (see background here - the rest of my question will make much more sense after reading that).
I am making a system() call to a program that generates data (XFOIL, again see above link). My single-core version of this program looks like this:
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" };
    alarm 250;
    system("xfoil <command_list >xfoil_output");
    alarm 0;
};
if ($@) {
    # read the output log and run timeout stuff...
    system('killall xfoil');  # Kill the hung XFOIL. Now it's a zombie.
}
Essentially, XFOIL should take only about 100 seconds to run - so after 250 seconds the program is hanging (presumably waiting for user input that it's never going to get).
The problem now is, if I do a killall in the multi-core version of my program, I'm going to kill 3 other instances of XFOIL, and those processes are generating data. So I need to kill only the hung instance, and this requires getting a PID.
I don't know very much about forks and such. From what I can tell so far, I would fork and then run exec('xfoil') inside the child process. But is the PID after the exec() different from the PID of the child process? (It's a separate program, so I'd assume it is, but again I have no experience with this.) If so, this still doesn't help when I want to forcefully kill the process, since I won't have its PID. How do I go about doing this?
Thanks a ton for your help!
If you want the PID, fork the process yourself instead of using system. The system command is mostly designed as a "fire and forget" tool. If you want to interact with the process, use something else. See, for instance, the perlipc documentation.
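A sketch of that advice. Note that exec() replaces the child's program image but keeps its PID, so the $pid returned by fork is exactly the one to kill. Here 'sleep 30' stands in for the real xfoil invocation, and a 2-second alarm for the script's 250.

```perl
use strict;
use warnings;

defined(my $pid = fork) or die "fork: $!";
if ($pid == 0) {
    exec 'sleep', '30' or die "exec: $!";   # child becomes the command
}

my $timed_out = 0;
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" };
    alarm 2;
    waitpid $pid, 0;      # blocks until this particular child exits
    alarm 0;
};
if ($@ && $@ eq "TIMEOUT\n") {
    $timed_out = 1;
    kill 'TERM', $pid;    # kill only this instance, not its siblings
    waitpid $pid, 0;      # reap it so it doesn't linger as a zombie
}
```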
I think you've already looked at Parallel::ForkManager based on answers to your question How can I make my Perl script use multiple cores for child processes?