So when I run this code it seems to fork bomb the system; can you help me out? All I want to do is start a process for each of the appWatch domains and environments.
#!/usr/bin/perl
#
#
# Starts the mass processes to watch each directory & environment.
#
#
#
###################################################################################
use strict;
use warnings;
use POSIX 'setsid';
setsid();
my @domains = (qw(austin batman luke heman drevil joker skeltor drevil goodguy badguy));
my @envs = (qw(qa dev));
foreach my $env (@envs){
    foreach my $guy (@domains){
        unless(my $pid = fork()){
            system("echo $env.$guy");
            system("sleep 10 ");
            #system("./appWatch -d $guy -e $env");
            open PID, ">>pid.lock";
            print PID $$ . "\n";
            print "$$ is Parent, $pid is child";
        }
    }
}
wait();
Your code should only create twenty children (one for each env/domain pair). If you are seeing a bunch of children being created, then you are running different code (or the culprit is appWatch, not your code). On a slightly unrelated note, there are a couple of things you should probably be doing differently:
fork has three possible return values, not two
you must reap your children or set the system up to reap them for you
you should use exec instead of system if you don't want to return to the code
you should use the multiple argument version of system and exec instead of the one argument version if you don't want the shell to do stuff with the arguments.
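To make that last point concrete, here is a tiny illustration of my own (not part of the answer): with the one-argument form the shell gets to reinterpret the string, while the list form passes the argument through untouched.
use strict;
use warnings;
my $arg = 'hello; echo INJECTED';
# One-argument form: the string is handed to the shell, which happily runs
# the extra command hidden inside $arg.
system("echo $arg");
# List form: no shell is involved, so echo just prints the literal string.
system("echo", $arg);
# exec has the same two forms, but never returns on success; the current
# process is replaced by the new program.
exec "echo", "done" or die "exec failed: $!";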
Here is my version of your code:
$SIG{CHLD} = "IGNORE"; # auto-reap the children
my @domains = qw(domains);
my @envs = qw(enviromentA enviromentB);
for my $env (@envs){
    for my $guy (@domains){
        die "could not fork: $!" unless defined(my $pid = fork);
        next if $pid;
        exec "./appWatch", "-d", $guy, "-e", $env;
        die "exec must have failed";
    }
}
Your updated version of the code shows what happened: your child does not exit. Here is how I would write your code:
#!/usr/bin/perl
# Starts the mass processes to watch each directory & environment.
use strict;
use warnings;
use POSIX 'setsid';
setsid();
my @domains = qw(
    austin batman luke heman
    drevil joker skeltor drevil
    goodguy badguy
);
my @envs = qw(qa dev);
my @pids;
for my $env (@envs){
    for my $guy (@domains){
        die "could not fork: $!" unless defined(my $pid = fork);
        if ($pid) {
            push @pids, $pid;
            next;
        }
        print "$env.$guy\n";
        sleep 10; # FIXME: I don't know if you really need this
        # exec will replace the child process with appWatch
        exec "./appWatch", "-d", $guy, "-e", $env;
        die "exec failed for some reason";
    }
}
for my $pid (@pids) {
    waitpid $pid, 0;
}
With
$ cat appWatch
#! /usr/bin/perl -l
print "[", join("][" => #ARGV), "]";
running on
$ uname -a
Linux mybox 2.6.32-24-generic #39-Ubuntu SMP Wed Jul 28 05:14:15 UTC 2010 x86_64 GNU/Linux
I get no fork bomb, just an unexciting Cartesian product:
$ ./prog.pl
[-d][domains][-e][enviromentA]
[-d][domains][-e][enviromentB]
My intention is to execute the long.pl Perl script with different paths as arguments, but since long.pl runs an indefinite loop, the main script never gets to the second path. I thought of using fork to do it, but I'm not sure whether it will solve my problem or not.
Some information on the method of achieving the task would be helpful, and please let me know if you need any clarification on the problem statement.
#!/usr/bin/perl
use strict;
use warnings;
print localtime () . ": Hello from the parent ($$)!\n";
my @paths = ('C:\Users\goudarsh\Desktop\Perl_test_scripts','C:\Users\goudarsh\Desktop\Perl_test_scripts/rtl2gds');
foreach my $path (@paths){
    my $pid = fork;
    die "Fork failed: $!" unless defined $pid;
    unless ($pid) {
        print localtime () . ": Hello from the child ($$)!\n";
        exec "long.pl $path"; # Some long running process.
        die "Exec failed: $!\n";
    }
}
long.pl
#!/usr/bin/perl
use strict;
use warnings;
while(1){
    sleep 3;
    # do some stuff here
}
Example run:
$ perl my_forker.pl
Done with other process.
Done with long running process.
Done with main process.
The following files must have executable permissions set:
long_running.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.020;
sleep 5;
say 'Done with long running process.';
other_process.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.020;
sleep 3;
say "Done with other process."
my_forker.pl:
use strict;
use warnings;
use 5.020;
my @paths = (
    './long_running.pl',
    './other_process.pl',
);
my @pids;
for my $cmd (@paths) {
    defined (my $pid = fork()) or die "Couldn't fork: $!";
    if ($pid == 0) { # then in child process
        exec $cmd;
        die "Couldn't exec: $!"; # this line will cease to exist if exec() succeeds
    }
    else { # then in parent process, where $pid is the pid of the child
        push @pids, $pid;
    }
}
for my $pid (@pids) {
    waitpid($pid, 0); # 0 => block
}
say "Done with main process.";
I'm using some idioms from Lincoln Stein's fine Network Programming With Perl book to write a server. I seem to be getting weird behaviors for a variable that is declared prior to a fork and referenced afterwards.
Here is a complete program illustrating the problem. (I apologize for its not being more stripped-down; when I stripped out all the stuff I thought was irrelevant, the problem went away.) If you look for ##### MYSTERY ##### you'll see two versions of the declaration my $pid. One version works, while the other doesn't. Following the call to become_daemon(), which assigns the child PID to $pid, I attempt to write it to a PID file, and then verify that this worked. Depending on which method of declaring it I used, it either succeeds or it fails. I don't get it!
#!/usr/bin/perl
#
# Prototype contactd master server
use warnings;
use strict;
use Carp;
use Getopt::Std;
use File::Basename;
use IO::Socket;
use IO::File;
use Net::hostent; # for OO version of gethostbyaddr
use POSIX qw{WNOHANG setsid};
use Data::Dumper;
#use 5.010;
sub say { print "@_\n"; }
my $program = basename $0;
my $default_config = "$program.config";
$| = 1; # flush STDOUT buffer regularly
my %opts;
my $config_file = $opts{c} || $default_config;
# Process the config file to obtain default settings
#
# Note: for now we'll hard code config values into the config hash.
#
my %config;
$config{PORT} = 2000;
$config{DAEMONIZE} = 0;
$config{VERBOSE} = 0;
$config{LOGDIR} = "/mxhome/charrison/private/wdi/logs";
$config{PIDFILE} = "/var/tmp/$program.pid";
# Process command line args to override default settings
#
my $server_port = $opts{p} || $config{PORT};
my $log_dir = $opts{l} || $config{LOGDIR};
my $verbose = !!( exists $opts{v} || $config{VERBOSE} );
my $daemonize = !!( exists $opts{d} || $config{DAEMONIZE} );
my $pid_file = $opts{P} || $config{PIDFILE};
################################################################################
# Set up signal handlers
#
# Caution: these call the logging manager servlog(), which has not yet been
# spawned.
################################################################################
# Set up a child-reaping subroutine for SIGCHLD
#
$SIG{CHLD} = sub {
local ( $!, $^E, $@ );
while ( ( my $kid = waitpid( -1, WNOHANG ) ) > 0 ) {
}
};
# Set up a signal handler for interrupts
#
my $quit = 0;
$SIG{INT} = sub {
$quit++;
};
# Set up signal handler for pipe errors
#
$SIG{PIPE} = sub {
local ( $!, $^E, $@ );
};
################################################################################
# DAEMONIZATION OCCURS HERE
################################################################################
my $pid_fh = open_pid_file($pid_file);
##### MYSTERY #####
my $pid; # this makes it work
# my $pid = $$; # this breaks it!
$daemonize = 1; # inserted here for demo
if ($daemonize) {
say "Becoming a daemon and detaching from your terminal. Bye!";
$pid = become_daemon(); # update w/new pid
}
say "Here is pid: $pid. Going to write it to $pid_file and close.";
# If we daemonized, then we are now executing with a different PID
#
# And in that case, the following fails silently!!
#
print $pid_fh $pid; # store our PID in known location in filesystem
close $pid_fh;
say "Boo boo" if !-e $pid_file;
say qx{cat $pid_file};
##### END OF DEMO #####
# open_pid_file()
#
# Courtesy of Network Programming with Perl, by Lincoln D. Stein
#
sub open_pid_file {
my $file = shift;
if ( -e $file ) { # PID file already exists
my $fh = IO::File->new($file) || return;
my $pid = <$fh>; # so read it and probe for the process
croak "Server already running with PID $pid\n" # die ...
if kill 0 => $pid; # if it responds
warn "Removing PID file for defunct server process $pid.";
croak "Can't unlink PID file $file" # die ...
unless -w $file && unlink $file; # if can't unlink
}
return IO::File->new( $file, O_WRONLY | O_CREAT | O_EXCL, 0644 )
or die "Can't create PID file $file: $!\n";
}
# become_daemon()
#
# Courtesy of Network Programming with Perl, by Lincoln D. Stein
#
sub become_daemon {
die "Can't fork" unless defined( my $child = fork );
exit 0 if $child != 0; # die here if parent
# --- PARENT PROCESS DIES
# --- CHILD PROCESS STARTS
setsid(); # Become session leader
open( STDIN, "</dev/null" );
# servlog() writes to STDOUT which is being piped to log manager
#
#open( STDOUT, ">/dev/null" );
open( STDERR, ">&STDOUT" );
chdir '/'; # go to root directory
umask(0); # ??
$ENV{PATH} = '/bin:/sbin:/usr/bin:/usr/sbin';
return $$;
}
END {
unlink $pid_file if $pid == $$; # only the daemon unlinks pid file
}
At the end of your code, you have:
END {
unlink $pid_file if $pid == $$; # only the daemon unlinks pid file
}
This works fine if $pid is undefined in the parent process. But if you initialize it to the parent process's ID, then the parent will unlink the PID file as soon as become_daemon calls exit.
(It seems to me that there's a race here between the child writing the PID file and the parent unlinking it, so the outcome might not always be the same.)
Edit: Actually, there is no race, since the PID file is opened before the fork. So the parent process opens the file, forks the child, unlinks the file and exits. The child still has a handle on the file and it can still write to it, but the file is no longer linked from anywhere in the filesystem, and it will disappear as soon as the child process exits.
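A small standalone demonstration of that behaviour (my addition, not part of the answer): a file that has been unlinked can still be written through an existing handle, but the data vanishes for good once the last handle is closed.
use strict;
use warnings;
open my $fh, ">", "demo.pid" or die "open: $!";
unlink "demo.pid";                    # the name disappears from the filesystem immediately
print {$fh} "still writable\n";       # succeeds: the open handle keeps the inode alive
print -e "demo.pid" ? "file exists\n" : "file is gone\n";   # prints "file is gone"
close $fh;                            # the data is lost along with the inode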
I am trying to run a background process in Perl. I create a child process, which is used to call another Perl script. I want to run a few lines of code in parallel with this child process, and after the child process is done, I want to print a line.
Main script
#!/usr/bin/perl
$|=1;
print "before the child process\n";
my $pid = fork();
if (defined $pid)
{
    system("perl testing.pl");
}
print "before wait command\n";
wait();
print "after 20 secs of waiting\n";
testing.pl
#!/usr/bin/perl
print "inside testing\n";
sleep(20);
Expected output
before the child process
before wait command
(should wait for 20 secs and then print)
after 20 secs of waiting
There are many problems with your script. Always:
use strict;
use warnings;
Localising special variables (such as $|) is a good practice. Only a variable containing the special value undef returns false for defined, so every other value (even the 0 that fork returns in the child, which is the case here) counts as defined; that is why both the parent and the child end up running testing.pl. In the other script, the shebang is wrong.
#!/usr/bin/perl
use strict;
use warnings;
local $| = 1;
print "Before the child process\n";
unless (fork) {
    system("perl testing.pl");
    exit;
}
print "Before wait command\n";
wait;
print "After 20 secs of waiting\n";
The “Background Processes” section of the perlipc documentation reads
You can run a command in the background with:
system("cmd &");
The command’s STDOUT and STDERR (and possibly STDIN, depending on your shell) will be the same as the parent’s. You won't need to catch SIGCHLD because of the double-fork taking place; see below for details.
Adding an ampersand to the argument to system in your program can vastly simplify your main program.
#! /usr/bin/env perl
print "before the child process\n";
system("perl testing.pl &") == 0
or die "$0: perl exited " . ($? >> 8);
print "before wait command\n";
wait;
die "$0: wait: $!" if $? == -1;
print "after 20 secs of waiting\n";
Handling fork's return value is a bit tricky, indeed.
A recent article by Aristotle features a nice and concise forking idiom, which, in your case, looks like:
#!/usr/bin/env perl
use 5.010000;
use strict;
use warnings qw(all);
say 'before the child process';
given (fork) {
    when (undef) { die "couldn't fork: $!" }
    when (0) {
        exec $^X => 'testing.pl';
    }
    default {
        my $pid = $_;
        say 'before wait command';
        waitpid $pid, 0;
        say 'after 20 secs of waiting';
    }
}
Pay attention to the exec $^X => '...' line: the $^X variable holds the full path to the current Perl executable, so the "right" Perl version is guaranteed. Also, a system call is pointless when you are already forking.
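A quick illustration of $^X (my own example, not from the answer): the parent prints its interpreter path and then re-invokes the same interpreter for a child one-liner.
print "running under $^X\n";          # e.g. /usr/bin/perl
system($^X, '-e', 'print "child ran under $^X\n"');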
I need to modify an existing Perl program. I want to pipe a string (which can contain multiple lines) through an external program and read the output from this program. This external program is used to modify the string. Let's simply use cat as a filter program. I tried it like this, but it doesn't work. (Output of cat goes to STDOUT instead of being read by perl.)
#!/usr/bin/perl
open(MESSAGE, "| cat |") or die("cat failed\n");
print MESSAGE "Line 1\nLine 2\n";
my $message = "";
while (<MESSAGE>)
{
    $message .= $_;
}
close(MESSAGE);
print "This is the message: $message\n";
I've read that this isn't supported by Perl because it may end up in a deadlock, and I can understand it. But how do I do it then?
You can use IPC::Open3 to achieve bi-directional communication with the child.
use strict;
use IPC::Open3;
my $pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR, 'cat')
    or die "open3() failed $!";
my $r;
for (my $i = 1; $i < 10; $i++) {
    print CHLD_IN "$i\n";
    $r = <CHLD_OUT>;
    print "Got $r from child\n";
}
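One detail worth adding (my note, not part of the answer): open3 does not reap the child for you, so once you are done with the loop above you would typically close the handles and wait for the child.
close CHLD_IN;        # let cat see end-of-file and exit
close CHLD_OUT;
close CHLD_ERR;
waitpid $pid, 0;      # reap the child so it does not linger as a zombie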
This involves system programming, so it’s more than a basic question. As written, your main program doesn’t require full-duplex interaction with the external program. Dataflow travels in one direction, namely
string → external program → main program
Creating this pipeline is straightforward. Perl’s open has a useful mode explained in the “Safe pipe opens” section of the perlipc documentation.
Another interesting approach to interprocess communication is making your single program go multiprocess and communicate between—or even amongst—yourselves. The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you’ve opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
This is an open that involves a pipe, which gives nuance to the return value. The perlfunc documentation on open explains.
If you open a pipe on the command - (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
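Here is a minimal sketch of the "-|" form on its own (my example; the date command is just a placeholder): the parent reads whatever the child writes to its STDOUT.
use strict;
use warnings;
my $pid = open(my $from_child, "-|") // die "$0: fork: $!";
if ($pid == 0) {      # child: its STDOUT feeds the parent's $from_child handle
    exec "date" or die "$0: exec: $!";
}
print "child said: $_" while <$from_child>;
close $from_child or warn "$0: close: $!";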
To create the scaffolding, we work in right-to-left order, using open to fork a new process at each step:
1. Your main program is already running.
2. Next, fork a process that will eventually become the external program.
3. Inside the process from step 2:
   a. First fork the string-printing process so as to make its output arrive on our STDIN.
   b. Then exec the external program to perform its transformation.
4. Have the string-printer do its work and then exit, which kicks up to the next level.
5. Back in the main program, read the transformed result.
With all of that set up, all you have to do is implant your suggestion at the bottom, Mr. Cobb.
#! /usr/bin/env perl
use 5.10.0; # for defined-or and given/when
use strict;
use warnings;
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
    "V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
    "V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel {
    given (open(STDIN, "-|") // die "$0: fork: $!") { # / StackOverflow hiliter
        snow_fortress when 0;
        exec @transform or die "$0: exec: $!";
    }
}
given (open(my $fh, "-|") // die "$0: fork: $!") {
    hotel when 0;
    print while <$fh>;
    close $fh or warn "$0: close: $!";
}
Thanks for the opportunity to write such a fun program!
You can use the -n command-line switch to effectively wrap your existing program code in a while loop... look at the man page for -n:
LINE:
while (<>) {
... # your program goes here
}
Then you can use the operating system's pipe mechanism directly
cat file | your_perl_prog.pl
(Edit)
I'll try to explain this more carefully...
The question is not clear about what part the perl program plays: filter or final stage. This works in either case, so I will assume it is the latter.
'your_perl_prog.pl' is your existing code. I'll call your filter program 'filter'.
Modify your_perl_prog.pl so that the shebang line has an added '-n' switch: #!/usr/bin/perl -n or #!/bin/env "perl -n"
This effectively puts a while(<>){} loop around the code in your_perl_prog.pl
add a BEGIN block to print the header:
BEGIN { print "HEADER LINE\n"; }
You can read each line with '$line = <>;' and process/print
Then invoke the lot with
cat sourcefile |filter|your_perl_prog.pl
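For instance, here is a made-up filter (call it upcase.pl; it is only an illustration) that prints a header once and uppercases every line it reads:
#!/usr/bin/perl -n
# -n wraps the code below in a while (<>) loop; $_ holds the current line.
BEGIN { print "HEADER LINE\n"; }
print uc;
Invoked as cat sourcefile | ./upcase.pl it prints the header followed by the uppercased input.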
I want to expand on @Greg Bacon's answer without changing it.
I had to do something similar, but wanted to code without the given/when constructs, and also found that explicit exit() calls were needed, because in the sample code the child simply fell through to the end and exited.
I also had to make it work under ActiveState Perl, where that approach does not work.
See this question: How to read to and write from a pipe in perl with ActiveState Perl?
#! /usr/bin/env perl
use strict;
use warnings;
my $isActiveStatePerl = defined(&Win32::BuildNumber);
sub pipeFromFork
{
    return open($_[0], "-|") if (!$isActiveStatePerl);
    die "active state perl cannot cope with dup file handles after fork";
    pipe $_[0], my $child or die "cannot create pipe";
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid) { # parent
        close $child;
    } else { # child
        open(STDOUT, ">&=", $child) or die "cannot clone child to STDOUT";
        close $_[0];
    }
    return $pid;
}
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
    "V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
    "V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel
{
    my $fh;
    my $pid = pipeFromFork($fh); # my $pid = open STDIN, "-|";
    defined($pid) or die "$0: fork: $!";
    if (0 == $pid) {
        snow_fortress;
        exit(0);
    }
    open(STDIN, "<&", $fh) or die "cannot clone to STDIN";
    exec @transform or die "$0: exec: $!";
}
my $fh;
my $pid = pipeFromFork($fh); # my $pid = open my $fh, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
    hotel;
    exit(0);
}
print while <$fh>;
close $fh or warn "$0: close: $!";
The simplest way -- not involving all these cool internals -- to do what the OP needs is to use a temporary file to hold the output until the external processor is done, like so:
open ToTemp, "|/usr/bin/tac>/tmp/MyTmp$$.whee" or die "open the tool: $!";
print ToTemp $TheMessageWhateverItIs;
close ToTemp;
my $Result = `cat /tmp/MyTmp$$.whee`; # or open and read it, or use File::Slurp, etc
unlink "/tmp/MyTmp$$.whee";
Of course, this isn't going to work for something interactive, but co-routines appear to be out of the scope of the original question.
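If you do go the temp-file route, the core File::Temp module saves you from hand-rolling /tmp names; here is a rough sketch under that assumption (tac again stands in for the real filter):
use strict;
use warnings;
use File::Temp qw(tempfile);
my $TheMessageWhateverItIs = "Line 1\nLine 2\n";
# tempfile() returns a handle and a unique file name.
my ($tmp_fh, $tmp_name) = tempfile();
close $tmp_fh;
# Pipe the string through the external filter into the temp file.
open my $to_filter, "|-", "tac > $tmp_name" or die "cannot start filter: $!";
print {$to_filter} $TheMessageWhateverItIs;
close $to_filter or die "filter failed: $!";
# Read the transformed result back and clean up.
open my $in, "<", $tmp_name or die "cannot read $tmp_name: $!";
my $Result = do { local $/; <$in> };
close $in;
unlink $tmp_name;
print $Result;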
I am running the below code snippet on Windows. The server keeps listening continuously after reading from the client. I want to terminate this command after a time period.
If I use the alarm() function call within main.pl, it terminates the whole Perl program (here main.pl), so I placed this system command in a separate Perl file
and called that file (alarm.pl) from the original Perl file using the system command.
But this way I was unable to capture the output of the system() call in either the original Perl file or the called one.
Could anybody please let me know how to terminate a system() call, or how to capture the output the way I tried above?
main.pl
my @output = system("alarm.pl");
print "one iperf completed\n";
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
alarm.pl
alarm 30;
my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
In both ways display.txt is always empty.
There are a few separate issues here.
First, to keep the alarm from killing your script, you need to handle the ALRM signal. See the alarm documentation. You shouldn't need two scripts for this.
Second, system doesn't capture output. You need one of the backtick variants or a pipe if you want to do that. There are answers for that on Stackoverflow already.
Third, if alarm.pl puts anything in display.txt, you discard it in main.pl when you re-open the file in write mode. You only need to create the file in one place. When you get rid of the extra script, you won't have this problem.
I recently had some problems with alarm and system, but switching to IPC::System::Simple fixed that.
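For what it is worth, here is a rough sketch of that combination (my own, not tested here): capture() from IPC::System::Simple returns the command's output and throws on failure, so it slots into the usual eval/alarm pattern. The adb/iperf command is the one from the question.
use strict;
use warnings;
use IPC::System::Simple qw(capture);
my @output;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 30;
    @output = capture("adb shell cd /data/app; ./iperf -u -s -p 5001");
    alarm 0;
};
if ($@) {
    warn "command timed out or failed: $@";
}
print @output;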
Good luck, :)
What the hell was I thinking? You don't need a background process for this task. You just need to follow the example in the perldoc -f alarm documentation and wrap your time-sensitive code in an eval block.
my $command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
my @output;
eval {
    local $SIG{ALRM} = sub { die "Timeout\n" };
    alarm 30;
    @output = `$command`;
    alarm 0;
};
if ($@) {
    warn "$command timed out.\n";
} else {
    print "$command successful. Output was:\n", @output;
}
Inside the eval block, you can capture your output the regular way (with backticks or qx() or readpipe). Though if the call times out, there won't be any output.
If you don't need the output (or don't mind hacking some interprocess communication together), an almost idiot-proof alternative is to set the alarm and run the system call in a child process.
$command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
if (($pid = fork()) == 0) {
    # child process
    $SIG{ALRM} = sub { die "Timeout\n" }; # handling SIGALRM in child is optional
    alarm 30;
    my $c = system($command);
    alarm 0;
    exit $c >> 8; # if you want to capture the exit status
}
# parent
waitpid $pid, 0;
waitpid will return when either the child's system command is finished, or when the child's alarm goes off and kills the child. $? will hold the exit code of the system call, or something else (142 on my system) for an unhandled SIGALRM or 255 if your SIGALRM handler calls die.
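If you want to tell those cases apart programmatically, here is a small self-contained sketch (my addition) of unpacking $?: the low 7 bits hold the signal that killed the child, the high byte holds its exit status.
use strict;
use warnings;
# Hypothetical child that exits with status 3 after a second.
defined(my $pid = fork) or die "fork: $!";
if ($pid == 0) { sleep 1; exit 3; }
waitpid $pid, 0;
if (($? & 127) != 0) {
    printf "child killed by signal %d\n", $? & 127;
} else {
    printf "child exited with status %d\n", $? >> 8;
}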
I ran into a similar problem that requires:
run a system command and get its output
time out the system command after x seconds
kill the system command process and all child processes
After much reading about Perl IPC and manual fork & exec, I came up with this solution. It is implemented as a simulated 'backtick' subroutine.
use Error qw(:try);
$SIG{ALRM} = sub {
    my $sig_name = shift;
    die "Timeout by signal [$sig_name]\n";
};
# example
my $command = "vmstat 1 1000000";
my $output = backtick(
    command => $command,
    timeout => 60,
    verbose => 0
);
sub backtick {
    my %arg = (
        command => undef,
        timeout => 900,
        verbose => 1,
        @_,
    );
    my @output;
    defined( my $pid = open( KID, "-|" ) )
        or die "Can't fork: $!\n";
    if ($pid) {
        # parent
        # print "parent: child pid [$pid]\n" if $arg{verbose};
        try {
            alarm( $arg{timeout} );
            while (<KID>) {
                chomp;
                push @output, $_;
            }
            alarm(0);
        }
        catch Error with {
            my $err = shift;
            print $err->{-text} . "\n";
            print "Killing child process [$pid] ...\n" if $arg{verbose};
            kill -9, $pid;
            print "Killed\n" if $arg{verbose};
            alarm(0);
        }
        finally {};
    }
    else {
        # child
        # set the child process to be a group leader, so that
        # kill -9 will kill it and all its descendents
        setpgrp( 0, 0 );
        # print "child: pid [$pid]\n" if $arg{verbose};
        exec $arg{command};
        exit;
    }
    wantarray ? @output : join( "\n", @output );
}
You might use "timeout -n " to wrap your commands if that's already available on your system.
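For example, assuming GNU coreutils' timeout(1) is installed (and with some_long_command standing in for the real command), a sketch would be:
# timeout kills the command after 30 seconds; backticks still capture
# whatever it printed before being killed.
my @output = `timeout 30 some_long_command`;
my $status = $? >> 8;   # GNU timeout exits with 124 when it had to kill the command
print "exit status: $status\n";
print @output;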