Remove Perl module from child stack - perl

I have a daemon which loads DBI (DBD::mysql) and then forks child processes. I'd like to prevent the DBI module from being in memory in the forked child processes.
So something like this:
#!/usr/bin/perl
use DBI;
my $dbh = DBI->connect(db_info);
my $pid = fork();
if ($pid == 0) {
    # The forked child here should not have DBI loaded
}
Thanks for the help!

Loading a module is to execute it like a script. There's absolutely no difference between a module and a script to Perl. To unload a module, one would need to undo the effects of running it. That can't be done mechanically, and it's not feasible to do manually.
The simplest solution would be to have the child exec something. It could even be the script you are already running.
exec($^X, $0, '--child', @args);
The child can be given access to the socket by binding it to the child's fd 0 (stdin) and fd 1 (stdout).
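A sketch of that approach, assuming the parent holds an accepted client socket in `$client` (the handle name and the `--child` flag are made up for illustration):

```perl
use strict;
use warnings;

sub spawn_child {
    my ($client) = @_;              # an accepted socket handle (hypothetical)
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: bind the socket to fd 0 and fd 1, then replace this
        # process image entirely, so DBI is never compiled here
        open STDIN,  '<&', $client or die "dup to stdin: $!";
        open STDOUT, '>&', $client or die "dup to stdout: $!";
        exec($^X, $0, '--child') or die "exec failed: $!";
    }
    return $pid;                    # parent keeps running with DBI loaded
}
```

The child re-executes the same script, so the fresh perl it runs never compiles DBI; it just reads and writes the socket through fd 0 and fd 1.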

You can't do that easily unless you move the load until after the fork. But to do that you have to avoid use, which loads at compile time. Do this instead:
my $pid = fork();
if ($pid) {
    # parent: load DBI only here
    require DBI;
    DBI->import();
} else {
    # child: DBI is never loaded
}
That keeps the DBI module from loading until after the fork, and only in the parent. The use statement essentially does a require plus import, but inside an implicit BEGIN {} block, which is why you have to avoid it here.
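A quick way to confirm the timing worked: %INC records every file loaded via require or use, so DBI.pm shows up only in the process whose branch actually called require. A minimal sketch:

```perl
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid) {
    # this branch loads DBI; the other branch never does
    require DBI;
    printf "parent: DBI loaded? %s\n", exists $INC{'DBI.pm'} ? 'yes' : 'no';
    waitpid $pid, 0;
} else {
    printf "child:  DBI loaded? %s\n", exists $INC{'DBI.pm'} ? 'yes' : 'no';
    exit 0;
}
```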

If you are running a modern Linux system, then forks are COW (copy on write). This means pages from the parent are copied into the child's address space only when they are modified by the parent or the child. So the DBI module's pages are shared with the forked children rather than duplicated; they cost essentially no extra memory.
Perl 5 does not have any way of unloading modules from memory. If you really need the children to run different code than the parent, you are better off separating that code into its own script and using exec after the fork to run it. This is slower than a plain fork, since the child code has to be compiled each time, so if you fork a lot it might be better to have two scripts that talk to each other over sockets and have the "child" script pre-fork its own workers.

Knowing now what you want to do with this: since there isn't a good way to unload modules in Perl, a good solution is to write an authentication server separate from the application server. The application server asks the authentication server whether an IP has permission. That way they remain wholly separate processes. This may also have a security benefit: your application code can't access your authentication database.
Since any given application is likely to expand to the point where it needs a SQL database of its own, this exercise is probably futile, but your call.
This is a bunch of extra work, maintenance, and complexity. It's only worthwhile if it's causing you real memory problems, not just because it bugs you. Remember, RAM is very cheap. Developer time is very expensive.
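For illustration, a minimal sketch of that split; the socket path, the CHECK/OK protocol, and the ip_allowed name are all made up:

```perl
use strict;
use warnings;
use IO::Socket::UNIX;

# Application-side helper: ask the separate auth daemon about an IP.
# DBI and the credential database live only in the auth process.
sub ip_allowed {
    my ($ip) = @_;
    my $sock = IO::Socket::UNIX->new(
        Type => SOCK_STREAM(),
        Peer => '/var/run/authd.sock',   # hypothetical path
    ) or die "can't reach auth server: $!";
    print $sock "CHECK $ip\n";
    chomp(my $reply = <$sock>);
    return $reply eq 'OK';
}
```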

Related

How do I disable Devel::Cover for forked child processes?

I noticed, that when I run my program with perl -MDevel::Cover=-silent,-nogcov foo.pl to collect coverage information for foo.pl, I am getting massive slowdowns from parts of my program that fork and exec non-perl programs like tar, gzip or dpkg-deb. Thanks to this question I figured out how to disable Devel::Cover selectively, so I'm now writing:
my $is_covering = !!(eval 'Devel::Cover::get_coverage()');
my $pid = fork();
if ($pid == 0) {
    eval 'Devel::Cover::set_coverage("none")' if $is_covering;
    exec 'tar', '-cf', ...
}
Doing so shaves five minutes of runtime off each test, which across 122 tests saves me 10 hours of computation time.
Unfortunately, I cannot always add this eval statement into the forked child process. For example it's impossible to do so when I use system(). I want to avoid rewriting each of my system() calls to a manual fork/exec.
Is there a way to disable Devel::Cover for my forked processes or basically for everything that is not my script foo.pl?
Thanks!
Forks::Super is kind of heavy, but it has the feature of post-fork callbacks that are executed after each fork but before any other code in a child process is executed.
use Forks::Super;
my $is_covering = !!(eval 'Devel::Cover::get_coverage()');
POSTFORK_CHILD {
    # runs in every child process immediately after fork()
    eval 'Devel::Cover::set_coverage("none")' if $is_covering;
};
...
I suspect your problem is not the fork per se, but rather the exec. The difference is somewhat academic but might lead to a possible solution. If you don't mind compiling your own version of Devel::Cover you could try commenting out this line: https://github.com/pjcj/Devel--Cover/blob/05392f3062dd2bdbf019d9a8fbae1b152b97d862/Cover.xs#L1140
This will cause any coverage data collected before an exec call to be lost and speed up the exec call.
If you can't compile your own version, adding local *Devel::Cover::_report = sub { }; before the exec calls should also speed up the execs but this is ultimately a similar solution to what you have already with the disadvantage of not using a published API.
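If rewriting every system() call is the sticking point, one compromise is a small wrapper that does the fork/exec once, so call sites only change mechanically. This is just a sketch built from the question's own eval trick; system_nocover is a made-up name:

```perl
use strict;
use warnings;

my $is_covering = !!(eval 'Devel::Cover::get_coverage()');

# Drop-in replacement for system(LIST): disables Devel::Cover in the
# child just before exec, then returns the usual $? status.
sub system_nocover {
    my @cmd = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        eval 'Devel::Cover::set_coverage("none")' if $is_covering;
        exec { $cmd[0] } @cmd;
        die "exec failed: $!";
    }
    waitpid $pid, 0;
    return $?;
}
```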

What are the Perl techniques to detach just a portion of code to run independently?

I'm not involved in close-to-OS programming techniques, but as I know, when it comes to doing something in parallel in Perl the weapon of choice is fork and probably some useful modules built upon it. The doc page for fork says:
Does a fork(2) system call to create a new process running the same program at the same point.
As a consequence, having a big application that consumes a lot of memory and calling fork for a small task means there will be 2 big perl processes, and the second will waste resources just to do some simple work.
So, the question is: what to do (or how to use fork, if it's the only method) in order to have a detached portion of code running independently and consuming just the resources it needs?
Just a very simple example:
use strict;
use warnings;
my @big_array = ( 1 .. 2000000 ); # at least 80 MB of memory
sleep 10; # time to inspect the memory usage easily
fork();
sleep 10; # time to inspect the memory usage easily
and the child process consumes 80+ MB too.
To be clear: I don't need to communicate with this detached code or use its result somehow; I just want to be able to say "hey, run this simple task for me in the background and let me continue my heavy work meanwhile ... and don't waste my resources!" when running a heavy Perl application.
fork() then exec() is your bunny here. You fork() to create a new process (which is a fairly cheap operation, see below), then exec() to replace the big perl you've got running with something smaller. It looks like this:
use strict;
use warnings;
use 5.010;
my @ary = (1 .. 10_000_000);
if (my $pid = fork()) {
    # parent
    say "Forked $pid from $$; sleeping";
    sleep 1_000;
} else {
    # child
    exec($^X, '-e', 'sleep 1_000') or die "exec failed: $!";
}
(@ary was just used to fill up the original process' memory a bit.)
I said that fork()ing was relatively cheap, even though it does copy the entire original process. These statements are not in conflict; the guys who designed fork noticed this same problem. The copy is lazy, that is, only the bits that are actually changed are copied.
If you find you want the processes to talk to each other, you'll start getting into the more complex domain of IPC, about which a number of books have been written.
Your forked process is not actually using 80MB of resident memory. A large portion of that memory will be shared - 'borrowed' from the parent process until either the parent or child writes to it, at which point copy-on-write semantics will cause the memory to actually be copied.
If you want to drop that baggage completely, run exec in your fork. That will replace the child Perl process with a different executable, thus freeing the memory. It's also perfect if you don't need to communicate anything back to the parent.
There is no way to fork just a subset of your process's footprint, so the usual workarounds come down to:
fork before you run memory intensive code in the parent process
start a separate process with system or open my $fh, '|-', .... Of course this new process won't inherit any data from its parent, so you will need to pass data to this child somehow.
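A sketch of that second option using the list form of the pipe open; worker.pl is a hypothetical small script that reads jobs from its STDIN:

```perl
use strict;
use warnings;

my @work_items = ('job1', 'job2');   # whatever data the child needs

# '|-' forks and wires the handle to the child's STDIN, so the
# parent can stream data down to the lightweight worker.
open my $to_child, '|-', $^X, 'worker.pl'
    or die "can't start worker: $!";
print {$to_child} "$_\n" for @work_items;
close $to_child or warn "worker exited with status $?";
```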
fork() as implemented on most operating systems is nicely efficient. It commonly uses a technique called copy-on-write, to mean that pages are initially shared until one or other process writes to them. Also a lot of your process memory is going to be readonly mapped files anyway.
Just because one process uses 80MB before fork() doesn't mean that afterwards the two will use 160. To start with it will be only a tiny fraction more than 80MB, until each process starts dirtying more pages.

threads in Dancer

I'm using Dancer 1.31, in a standard configuration (plackup/Starman).
In a request I wished to call a perl function asynchronously, so that the request returns immediately. Think of the typical "long running operation" scenario, in which one wants to return a "processing" page with a refresh+redirect.
I (naively?) tried with a thread:
sub myfunc {
    sleep 9; # just for testing a slow operation
}
any '/test1' => sub {
    my $thr = threads->create('myfunc');
    $thr->detach();
    return "done";
};
It does not work: the server seems to freeze, and the error log does not show anything. I guess manual creation of threads is forbidden inside Dancer? Is it an issue with PSGI? What is the recommended way?
I would stay away from perl threads especially in a web server environment. It will most likely crash your server when you join or detach them.
I usually create a few threads (thread pool) BEFORE initializing other modules and keep them around for the entire life time of the application. Thread::Queue nicely provides communication between the workers and the main thread.
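A minimal sketch of that pattern with core modules (the pool size and job payloads are arbitrary):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Create the pool early, before heavyweight modules are loaded.
my $jobs = Thread::Queue->new;
my @pool = map {
    threads->create(sub {
        while (defined(my $job = $jobs->dequeue)) {
            # ... long-running work goes here ...
        }
    });
} 1 .. 4;

# Later, e.g. from a request handler:
$jobs->enqueue('some job');

# At shutdown, one undef per worker ends its loop.
$jobs->enqueue(undef) for @pool;
$_->join for @pool;
```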
The best asynchronous solution I find in Perl is POE. In Linux I prefer using POE::Wheel::Run to run executables and subroutines asynchronously. It uses fork and has a beautiful interface allowing communication with the child process. (In Windows it's not usable due to thread dependency)
Setting up Dancer and POE inside the same application/script may cause problems and POE's event loop may be blocked. A single worker thread dedicated to POE may come handy, or I would write another server based on POE and just communicate with the Dancer application via sockets.
Threads are definitely iffy with Perl. It might be possible to write some threaded Dancer code, but to be honest I don't think we ever tried it. And considering that Dancer 1's core uses singleton classes, it might also be very tricky.
As Ogla says, there are other ways to implement asynchronous behavior in Dancer. You say that you are using Starman, which is a forking engine. But there is also Twiggy, which is AnyEvent-based. To see how to leverage it to write asynchronous code, have a gander at Dancer::Plugin::Async.

Inline::Java conflicts with Parallel::ForkManager

I am having a problem calling both Parallel::ForkManager and Inline::Java at the same time. Specifically, if I call Inline::Java with the JNI => 1 option (which I have to), then the forked process doesn't come back to the parent. Here is the code:
use Parallel::ForkManager;
##### Calling Inline::Java #####
use Inline Java => <<END, JNI => 1;
END
###### End of Inline::Java #####
my $pm = Parallel::ForkManager->new(2);
for my $i (0..1) {
    $pm->start and next;
    print "Inside process $i\n";
    $pm->finish;
}
$pm->wait_all_children;
print "Back to Parent.\n";
If I run this program, it goes into the child processes but never comes back to the parent. If I remove the 3 lines between the comments, it runs fine. If I change the JNI => 1 to JNI => 0 (not that I'm allowed to change that parameter for my purpose), then there is an error message of Lost connection with Java virtual machine at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/Inline/Java.pm line 975.
Does anyone have a clue how to resolve the conflict? I also have to call the Inline::Java before the parallel process so using require after parallel is done is not an option. Thx!
Every child is talking over the same socket, which leads to the VM receiving gibberish.
You need to delay making a connection to the VM so that it's done in the children instead of in the parent.
You could move everything Inline::Java-related into another module, then use require Child; (not use Child;) after start.
If you need to use Inline::Java before launching the child, do it in a different process.
Using forks with Inline::Java is going to be a problem. Your perl script needs to maintain a TCP connection with the JVM. When you fork a new process, the same file descriptors for communicating with the JVM are passed to the child process so the parent and all the child processes are using the same sockets. That won't work. You need to redesign your application.
One possibility (which you have already discounted) is to delay starting the JVM until after a fork, starting a new JVM in each child process.
Another approach is to forget about forking from Perl and to leverage Java's superior threading model to do parallelization. Design your Java code to perform its tasks in new threads, and start new threads from Perl:
my $java = ... entry point to JVM ...
for my $n (1 .. $num_threads) {
    $java->startNewThread(@inputs);
}
$java->waitForThreadsToFinish();
$result = $java->getResults();
Perl also has its own threading model (see threads and threads::shared). I doubt that Perl's threads will work for this problem, but it still might be worth a try.
Update: another possibility that is mentioned in the Inline::Java docs is to use a shared JVM. Invoke Inline::Java with the option SHARED_JVM => 1, and when a new child process starts, call Inline::Java::reconnect_JVM() from the child to make a fresh connection. The downsides of this approach are
it keeps the JVM active after the program ends, so you have to remember to kill the JVM
it is incompatible with the option JNI => 1, which might be a dealbreaker for the OP.
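A sketch of the SHARED_JVM variant, going only by what the Inline::Java docs describe (the Java class here is a trivial placeholder):

```perl
use strict;
use warnings;
use Inline Java => <<'END', SHARED_JVM => 1;
    class Hello {
        public static String greet() { return "hi"; }
    }
END

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # the child must open its own connection to the shared JVM
    Inline::Java::reconnect_JVM();
    print Hello->greet(), "\n";
    exit 0;
}
waitpid $pid, 0;
# NB: the shared JVM outlives the script and must be killed separately.
```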

Is there a way to have managed processes in Perl (i.e. a threads replacement that actually works)?

I have a multithreaded application in Perl for which I have to rely on several non-thread-safe modules, so I have been using fork()ed processes with kill() signals as a message-passing interface.
The problem is that the signal handlers are a bit erratic (to say the least) and often leave processes killed in inappropriate states.
Is there a better way to do this?
Depending on exactly what your program needs to do, you might consider using POE, which is a Perl framework for multi-threaded applications with user-space threads. It's complex, but elegant and powerful and can help you avoid non-thread-safe modules by confining activity to a single Perl interpreter thread.
Helpful resources to get started:
Programming POE presentation by Matt Sergeant (start here to understand what it is and does)
POE project page (lots of cookbook examples)
Plus there are hundreds of pre-built POE components you can use to assemble into an application.
You can always have a pipe between parent and child to pass messages back and forth.
pipe my $reader, my $writer;
my $pid = fork();
if ( $pid == 0 ) {
    close $reader;
    ...
}
else {
    close $writer;
    my $msg_from_child = <$reader>;
    ....
}
Not a very comfortable way of programming, but it shouldn't be 'erratic'.
Have a look at forks.pm, a "drop-in replacement for Perl threads using fork()" which makes for much more sensible memory usage (but don't use it on Win32). It will allow you to declare "shared" variables and then it automatically passes changes made to such variables between the processes (similar to how threads.pm does things).
From perl 5.8 onwards you should be looking at the core threads module. Have a look at http://metacpan.org/pod/threads
If you want to use modules which aren't thread safe you can usually load them with a require and import inside the thread entry point.
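A sketch of that idea; Some::Module stands in for whatever non-thread-safe module you need per-thread:

```perl
use strict;
use warnings;
use threads;

# Loading inside the thread entry point means the module is compiled
# in this thread only, after the interpreter clone has happened.
my $thr = threads->create(sub {
    require Some::Module;
    Some::Module->import;
    # ... call into Some::Module here ...
});
$thr->join;
```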