Parallel execution of command using Parallel::ForkManager - perl

I want to know whether my understanding of the script/logic below is right.
I have a list of nodes, and I need to run a certain command on each node by SSHing to a number of servers I have available, so that the work happens in parallel.
I have a node_list.txt file which contains the list of nodes:
node1
node2
.
.
node49
node50
I have defined the servers in an array @hosts. I will split node_list.txt into an equal number of parts (called node_list_X.txt), one per available server, then SSH to each server and execute the command for that server's share of nodes.
Once I have these files (node_list_1.txt, node_list_2.txt, node_list_3.txt, node_list_4.txt), I will log in to each of the predefined servers and execute the command on each of them in parallel, passing in the corresponding node_list_X.txt file.
To execute this in parallel I am using the Parallel::ForkManager Perl module.
So that, let's say, on each host -
192.168.0.1 -> node_list_1.txt (13 nodes)
192.168.0.2 -> node_list_2.txt (13 nodes)
192.168.0.3 -> node_list_3.txt (12 nodes)
192.168.0.4 -> node_list_4.txt (12 nodes)
will run in parallel.
Script is below:
...
my @hosts = ("192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4");

open(my $node_fh, '<', $node_file)
    or die "can't open $node_file: $!";
my @lines = <$node_fh>;

my %Files;
my $num_buckets = scalar @hosts;
my $per_bucket  = int( @lines / $num_buckets );
my $num_extras  = @lines % $num_buckets;
my $path = "/home/user/vinod/test/";
for my $bucket_num (0..$num_buckets-1) {
    my $num_lines = $per_bucket;
    if ($num_extras) {
        ++$num_lines;
        --$num_extras;
    }
    last if ($num_lines == 0);

    my $qfn = $path."node_list_${bucket_num}.txt";
    open(my $fh, '>', $qfn)
        or die("Can't create \"$qfn\": $!\n");
    $fh->print(splice(@lines, 0, $num_lines));
    $Files{$bucket_num} = $qfn;
}
print Dumper(\%Files);
my $command = ...; # defining my command here
my $pm = Parallel::ForkManager->new(5);

DATA_LOOP:
foreach my $n (0..$num_buckets-1) {
    if ( exists $Files{$n} ) {
        my $pid = $pm->start and next DATA_LOOP;
        # child: build the command and run it on the matching server
        # (note $hosts[$n]; the original $hosts[$n-1] sent bucket 0
        # to the last server in the list)
        my $command_to_execute = $command . " " . $Files{$n};
        my $ssh    = SSH_Connection( $hosts[$n], "user", "password" );
        my $result = $ssh->capture($command_to_execute);
        $pm->finish;
    }
}
$pm->wait_all_children;
# SSH Connect
sub SSH_Connection {
    my ( $host, $user, $passwd ) = @_;
    my $ssh = Net::OpenSSH->new($host,
        user        => $user,
        password    => $passwd,
        master_opts => [-o => "StrictHostKeyChecking=no"],
    );
    $ssh->error and die "Couldn't establish SSH connection: " . $ssh->error;
    return $ssh;
}
Here everything works fine.
When I define the $pm object, the number of parallel processes is set to 5:
my $pm = Parallel::ForkManager->new(5);
Does this mean that at a time, on a particular server (e.g. 192.168.0.1), it should run 5 parallel processes? That is, should it take 5 nodes at a time from node_list_1.txt (out of 13) and execute the command on them?
Is my understanding correct? If not, what could be the possible solution to run the command on each server in parallel with multi-threading?

Does this mean that at a time, on a particular server (e.g. 192.168.0.1), it should run 5 parallel processes?
No. P::FM doesn't know anything about servers. It manages processes, and ->new(5) means ->start will wait for one of the processes it created to finish before creating a new one if 5 of them are still executing.
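For example, this caps the number of live children at 5, regardless of which hosts they talk to (the sleep stands in for real work):

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(5);   # never more than 5 live children

for my $job (1 .. 20) {
    $pm->start and next;   # parent: blocks here while 5 children are running
    # child process: do the work, then exit via finish()
    sleep 1;
    $pm->finish;
}
$pm->wait_all_children;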
what could be the possible solution to run the command on each server in parallel with multi-threading?
Assuming you meant multi-tasking generally rather than multi-threading specifically (since you aren't using threads), creating a process for each host could be done as follows:
my %children;
my $error = 0;
for my $host (@hosts) {
    my $pid = fork();
    if (!defined($pid)) {
        warn("Can't execute on $host: Can't fork: $!\n");
        next;
    }

    if ($pid) {
        ++$children{$pid};
        next;
    }

    # child
    if (!eval {
        do_it($host);
        return 1; # No exception
    }) {
        warn("Error executing commands on $host: $@");
    }
    exit;   # the child must not fall through into the next loop iteration
}

while (%children) {
    ( my $pid = wait() ) >= 0
        or die("Can't wait: $!\n");
    delete($children{$pid});
}

If you want to run jobs on a bunch of different servers, consider a proper job queue. Perl's Minion is very nice. Various servers can connect to it, ask for jobs in various ways, and send back the results.
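For a flavor of the idea, here is a sketch with Minion (it assumes the separately installed Minion::Backend::SQLite backend; the task name and job body are made up):

use Minion;

my $minion = Minion->new(SQLite => 'sqlite:minion.db');

# a worker somewhere defines what the job does
$minion->add_task(run_on_host => sub {
    my ($job, $host, $node_file) = @_;
    # ... ssh to $host and run the command against $node_file ...
    $job->finish("done on $host");
});

# the producer enqueues one job per bucket
my @hosts = ("192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4");
$minion->enqueue(run_on_host => [ $hosts[$_], "node_list_$_.txt" ]) for 0 .. $#hosts;

# in production you'd run dedicated worker processes; for a quick
# local test, perform all queued jobs right here:
$minion->perform_jobs;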

Have you considered using Net::OpenSSH::Parallel?
It seems to me that it directly supports what you want to do and is able to handle lots of connections in parallel, schedule them, handle errors, retry failed commands, etc.
Update: But will it allow me to run jobs in parallel inside each host?
But what do you really want to do? Distribute jobs over a set of workers? In that case, brian d foy's solution is probably a better option.
Anyway, Net::OpenSSH::Parallel was never intended for that, and yet it can do it:
my @hosts = ...;
my @tasks = ...;
my $n_workers = 5;

my $ossh = Net::OpenSSH::Parallel->new;
for my $host (@hosts) {
    for my $ix (0 .. $n_workers - 1) {   # $n_workers connections per host
        $ossh->add_host("$host-$ix", host => $host);
    }
}

my $fetch_task = sub {
    my ($pssh, $label) = @_;
    if (defined (my $task = shift @tasks)) {
        $ossh->push($label, cmd => $task);
        $ossh->push($label, sub => $fetch_task);
    }
};

$ossh->push('*', sub => $fetch_task);
$ossh->run;

Related

Better way to handle perl sockets to read/write to active process

First of all, I'd ask you guys not to offer a workaround as a solution (although it would be cool to know other ways to do it). I was setting up the tg-master project (telegram for CLI) to be used by a check_mk alert plugin. I found out that telegram runs as a stdin/stdout process, so I thought it would be cool to "glue" it, and I wrote the two pieces of code below with a lot of building blocks from blogs and CPAN. They already work (I need to handle broken pipes sometimes), but I was wondering whether sharing this could bring new ideas from experts.
As you can see, my code relies on an eval with a die when reading from the spawned process, and I know that is not the best way to do it. Any suggestions? :D
Thank you guys
Server
use strict;
use IO::Socket::INET;
use IPC::Open2;
use POSIX;

our $pid;

use sigtrap qw/handler signal_handler normal-signals/;

sub signal_handler {
    print "what a signal $!\nlets kill $pid\n";
    kill 'SIGKILL', $pid;
    #die "Caught a signal $!";
}

# auto-flush on socket
$| = 1;

# creating a listening socket
my $socket = new IO::Socket::INET(
    LocalHost => '0.0.0.0',
    LocalPort => '7777',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1,
);
die "cannot create socket $!\n" unless $socket;
print "server waiting for client connection on port 7777\n";

my ( $read_proc, $write_proc );
my ( $uid, $gid ) = ( getpwnam "nagios" )[ 2, 3 ];
POSIX::setgid($gid);    # GID must be set before UID!
POSIX::setuid($uid);

$pid = open2( $read_proc, $write_proc, '/usr/bin/telegram' );

# flush first messages
eval {
    local $SIG{ALRM} = sub { die "Timeout" };    # alarm handler
    alarm(1);
    while (<$read_proc>) { }
};

while (1) {
    my $client_socket  = $socket->accept();
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();
    print "connection from $client_address:$client_port\n";

    # read until \n
    my $data = "";
    $data = $client_socket->getline();

    # write to spawned process stdin the line we got on $data
    print $write_proc $data;
    $data = "";

    eval {
        local $SIG{ALRM} = sub { die "Timeout" };    # alarm handler
        alarm(1);
        while (<$read_proc>) {
            $client_socket->send($_);
        }
    };

    # notify client that response has been sent
    shutdown( $client_socket, 1 );
}
$socket->close();
Client
echo "contact_list" | nc localhost 7777
or
echo "msg user#12345 NAGIOS ALERT ... etc" | nc localhost 7777
or
some other perl script =)
If you are going to implement a script that performs both reads and writes from/to different handles, consider using select (the one defined as select RBITS,WBITS,EBITS,TIMEOUT in the documentation). In this case you will totally avoid using alarm with a signal handler in eval to handle a timeout, and will only have one loop with all of the work happening inside it.
Here is an example of a program that reads from both a process opened with open2 and a network socket, not using alarm at all:
use strict;
use warnings;
use IO::Socket;
use IPC::Open2;

use constant MAXLENGTH => 1024;

my $socket = IO::Socket::INET->new(
    Listen    => SOMAXCONN,
    LocalHost => '0.0.0.0',
    LocalPort => 7777,
    Reuse     => 1,
);

# accepting just one connection
print "waiting for connection...\n";
my $remote = $socket->accept();
print "remote client connected\n";

# simple example of the program writing something
my $pid = open2(my $localread, my $localwrite, "sh -c 'while : ; do echo boom; sleep 1 ; done'");

for ( ; ; ) {
    # cleanup vectors for select
    my $rin = '';
    my $win = '';
    my $ein = '';

    # will wait for a possibility to read from these two descriptors
    vec($rin, fileno($localread), 1) = 1;
    vec($rin, fileno($remote), 1) = 1;

    # now wait
    select($rin, $win, $ein, undef);

    # check which one is ready. read with sysread, not <>, as select doc warns
    if (vec($rin, fileno($localread), 1)) {
        print "read from local process: ";
        sysread($localread, my $data, MAXLENGTH);
        print $data;
    }
    if (vec($rin, fileno($remote), 1)) {
        print "read from remote client: ";
        sysread($remote, my $data, MAXLENGTH);
        print $data;
    }
}
In real production code you will need to carefully check for errors returned by the various functions (socket creation, open2, accept, and select).
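For instance, checking select itself might look like this (a sketch, using the documented idiom of copying the read vector so the original is preserved):

my $nfound = select(my $rout = $rin, undef, undef, undef);
die "select failed: $!" if $nfound == -1;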

concurrent login to linux machine using perl

As part of concurrency testing, I have to log in to a Linux server using telnet, concurrently. I need a maximum of 50 logins to the servers. I am able to write a simple script using Perl's Expect, but the session ends once the loop moves on to the next item. Can someone help out on how to do this? Also, the implementation below makes the connections serially, not concurrently.
for (my $i = 1; $i <= 5; $i++) {
    my $exp = Expect->spawn("telnet abc") or die "Cannot spawn telnet: $!\n";
    $exp->expect($timeout, "Login:");
    $exp->send("$username\n");
    $exp->expect($timeout, "Password:");
    $exp->send("$password\n");
}
You should do all the work in Perl. In your loop, each Expect object goes out of scope at the end of the iteration, which destroys it and ends the session; keep the objects alive in an array instead:
my %users = (
    'name1'  => 'pw1',
    'name2'  => 'pw2',
    # ...
    'name50' => 'pw50',
);

my @sessions;
while (my ($user, $pass) = each %users) {
    my $exp = Expect->spawn("telnet abc")
        or die "Cannot spawn telnet: $!\n";
    $exp->expect($timeout, "Login:");
    $exp->send("$user\n");
    $exp->expect($timeout, "Password:");
    $exp->send("$pass\n");
    push @sessions, $exp;
}

# now that you're all logged in, logout
for my $exp (@sessions) {
    $exp->send("exit\n");
    $exp->expect('eof');   # I'm not sure this is correct
}

Perl Script cannot fork more than 10 times

My Perl code does not allow more than 10 forks. With the following Perl code, whenever I use more than 10 machines in the list of machines read into the script, the script only forks 10 processes for 10 machines, and for the rest it dies with the error:
SSHProcessError The ssh process was terminated. at serverLogin.pl 44
It dies at the line that says $ssh->waitfor('The authenticity of host*',15);.
PERL SCRIPT:
#!/usr/bin/perl -w
use Net::SSH::Expect;
use Term::ReadKey;

print "please enter filename:\n";
$filename = ReadLine;
chomp $filename;
print "please enter user ID:\n";
$userID = ReadLine;
chomp $userID;
print "please enter password:\n";
ReadMode 'noecho';
$passwordforuser = ReadLine 0;
chomp $passwordforuser;
ReadMode 'normal';

open READFILE, "<", "$filename" or die "Could not open file listofmachines\n";
my @listofmachines = <READFILE>;

foreach $machine (@listofmachines)
{
    my $pid = fork();
    if ($pid) {
        push(@childprocs, $pid);
    }
    elsif ( $pid == 0 ) {
        my $ssh = Net::SSH::Expect->new (
            host     => "$machine",
            user     => "$userID",
            password => "$passwordforuser",
            timeout  => 25,
            raw_pty  => 1,
        );
        my $login_output = $ssh->run_ssh or die "Could not launch SSH\n";
        $ssh->waitfor('The authenticity of host*', 15);
        #print "This output for machine $machine\n";
        $ssh->send("yes");
        $ssh->waitfor('password: ', 15);
        $ssh->send("$passwordforuser");
        $ssh->waitfor('$ ', 10);
        my @commresult = $ssh->exec("uptime");
        print $login_output;
        print @commresult;
        exit 0;
    }
    else {
        die "Could not Fork()\n";
    }
}

foreach (@childprocs) {
    waitpid($_, 0);
}
Please help. Thanks, nblu.
Here is your script using Net::OpenSSH::Parallel instead of Net::SSH::Expect.
The number of simultaneous connections is limited to 10 in order to avoid the kind of resource-exhaustion problem happening in your script (probably PTY exhaustion):
#!/usr/bin/perl -w
use Net::OpenSSH::Parallel;
use Term::ReadKey;

print "please enter filename:\n";
$filename = ReadLine;
chomp $filename;
print "please enter user ID:\n";
$userID = ReadLine;
chomp $userID;
print "please enter password:\n";
ReadMode 'noecho';
$passwordforuser = ReadLine 0;
chomp $passwordforuser;
ReadMode 'normal';

open READFILE, "<", "$filename" or die "Could not open file listofmachines\n";
my @listofmachines = <READFILE>;
chomp @listofmachines;

my $pssh = Net::OpenSSH::Parallel->new(connections => 10);
$pssh->add_host($_,
                user => $userID, password => $passwordforuser,
                master_opts => [-o => 'StrictHostKeyChecking=no'])
    for @listofmachines;

sub do_ssh_task {
    my ($host, $ssh) = @_;
    my $output = $ssh->capture('uptime');
    print "$host: $output";
}

$pssh->all(parsub => \&do_ssh_task);
$pssh->run;

for my $host (@listofmachines) {
    if (my $error = $pssh->get_error($host)) {
        print STDERR "remote task failed for host $host: $error\n";
    }
}
By default, the remote ssh daemon limits the number of concurrent connections it will accept; in particular, MaxStartups caps pending (unauthenticated) connections at something like 10. If that is a problem for you, you will need to change the server configuration...
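If it is sshd's throttling you are hitting, the relevant knobs live in /etc/ssh/sshd_config on the server (example values; MaxStartups caps concurrent unauthenticated connections, MaxSessions caps sessions multiplexed over one connection):

MaxStartups 30:30:100
MaxSessions 30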
Perhaps you have a limit on the number of processes you can create? Can you create 30 or more processes in a loop where the children just sleep(60)?
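A throwaway sketch of that test:

# fork 30 children that only sleep, to see whether fork itself is the limit
my @pids;
for my $i (1 .. 30) {
    my $pid = fork();
    die "fork failed at child $i: $!" unless defined $pid;
    if ($pid == 0) { sleep 60; exit 0 }
    push @pids, $pid;
}
print scalar(@pids), " children forked\n";
waitpid($_, 0) for @pids;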
If in fact you have a limit on how many you can do at once, try using Parallel::ForkManager.
If this is from hitting a limit on pseudoterminals, how you raise that depends on the kernel version; what does uname -a say? It also depends on whether the code is using BSD or SysV/UNIX98 ptys. If you see it opening files like /dev/ptyXY, where X is one of a-e or p-z, it's the former, and you will have a hard limit of 256 system-wide.
You can change passwords without a pseudoterminal using usermod instead of passwd, but this momentarily exposes the crypted password in the process list; that may be acceptable in your case.

Running AnyEvent under Dancer application

I would like to do some non-blocking SSH to a couple of thousand machines that I'm tracking (my own machines). I have a Dancer application up and running, and I'm willing to use AnyEvent::timer to execute SSH commands asynchronously (each machine has its own polling interval, and I don't want one machine to wait for another to complete its SSH work).
I'm wondering: what is the best way to act asynchronously in a synchronous environment?
It is not a very good idea to run external commands from within your web scripts.
For one, should your external call block or crash for any reason, it will create a bad experience for the user (even if that user is just you).
Then, running external commands as the web user may have a lot of security implications - I would think your web user most likely has passwordless ssh set up, doesn't it? What if someone figures out some security hole in your script and manages to use it to ssh into your servers?
Instead, you should create a separate service or process which will regularly poll your servers' status using ssh (or whatever else) and save the results of that scan into a database - Postgres or MySQL.
Then, change your Dancer app to display the collected results from the database, rather than doing live ssh requests. This way it will be very fast and secure.
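A minimal sketch of such a poller (run it from cron or a loop; the DBI DSN, table layout, and host list are assumptions, and it needs DBD::SQLite):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Net::OpenSSH;

my $dbh = DBI->connect('dbi:SQLite:dbname=status.db', '', '', { RaiseError => 1 });
my @machines = ('host1', 'host2');   # placeholder host list

for my $m (@machines) {
    my $ssh    = Net::OpenSSH->new($m);
    my $result = $ssh->error ? 'unreachable: ' . $ssh->error
                             : $ssh->capture('uptime');
    # the Dancer app then just SELECTs from this table
    $dbh->do(q{INSERT INTO status (host, checked_at, result)
               VALUES (?, datetime('now'), ?)}, undef, $m, $result);
}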
It may not be a good idea, but it is possible. I have a big Dancer application that executes scripts remotely, and I'm doing it with fork and Net::SSH2. I tried threads, but some modules are not thread-safe, so I recommend using fork.
I have some comments about it in my blog http://perlondancer.blogspot.mx/2014/04/executing-remote-commands-from-dancer.html and the code example below is in this gist: https://gist.github.com/johandry/11197516
#!/usr/bin/env perl

use strict;
use warnings;
use Dancer;
use Net::SSH2;

sub execCommand ($$) {
    my ( $ssh2, $cmd ) = @_;
    my %args = (
        timeout => 1_000,     # polling timeout
        bufsize => 10_240,    # read buffer size when polling
    );

    $ssh2->blocking(1);             # needed for ssh->channel
    my $chan = $ssh2->channel();    # create SSH2 channel
    if ($ssh2->error()) {
        return (undef, undef, 100);
    }

    # exec $cmd (caveat: only one command line can be executed over this
    # channel. No "ls -l;whoami" combo. Use ssh->shell instead.)
    unless ($chan->exec($cmd)) {
        return (undef, undef, 500);
    }

    # define polling context: will poll stdout (in) and stderr (ext)
    my @poll = ( { handle => $chan, events => ['in', 'ext'] } );
    my %std = ();          # hash of strings storing the stdout/stderr results
    $ssh2->blocking(0);    # needed for channel->poll
    while (!$chan->eof) {  # there is still something to read from the channel
        $ssh2->poll( $args{'timeout'}, [ @poll ] );   # if any event, it will be stored into @poll
        my ( $n, $buf );   # number of bytes read ($n) into buffer ($buf)
        foreach my $poll ( @poll ) {              # for each event
            foreach my $ev ( qw( in ext ) ) {     # for each of stdout/stderr
                next unless $poll->{revents}{$ev};
                # there is something to read here, into the $std{$ev} string
                if ( $n = $chan->read( $buf, $args{'bufsize'}, $ev eq 'ext' ) ) {
                    # got $n bytes into $buf for stdout ($ev eq 'in') or stderr ($ev eq 'ext')
                    $std{$ev} .= $buf;
                }
            }
        }
    }
    $chan->wait_closed();    # not really needed but cleaner
    my $exit_code = $chan->exit_status();
    $chan->close();          # not really needed but cleaner
    $ssh2->blocking(1);      # set it back for sanity (future calls)

    return ($std{'in'}, $std{'ext'}, $exit_code);
}
sub execute ($$$$) {
    my ($ip, $username, $password, $cmd) = @_;
    my $pid = fork();
    if ($pid) {
        # This is the parent (DANCER)
        debug "Process started with PID $pid\n";
    } elsif ( $pid == 0 ) {
        # This is the child
        my $ssh2 = Net::SSH2->new();
        $ssh2->connect( $ip ) or debug("Cannot connect to $ip");
        my $publicKeyFile  = './id_rsa.pub'; # path(setting('appdir'), 'db', 'id_rsa.pub'); # I prefer to copy the public key into the app dir due to permission issues
        my $privateKeyFile = './id_rsa';     # path(setting('appdir'), 'db', 'id_rsa');     # I prefer to copy the private key into the app dir due to permission issues
        if ( $ssh2->auth_publickey( $username, $publicKeyFile, $privateKeyFile, $password ) ) {
            my ($stdout, $stderr, $exitcode) = execCommand($ssh2, $cmd);
        } else {
            debug "Could not authenticate to $ip with $username";
        }
        $ssh2->disconnect();
        exit 0;    # the child must exit instead of returning into Dancer
    } else {
        debug "Could not fork: $!\n";
    }
}
set logger      => "console";
set log         => "core";
set show_errors => 1;

get '/uptime/:ip' => sub {
    my $username = "the username";
    my $password = "the password";
    execute(param('ip'), $username, $password, "uptime > /tmp/dancer_example.txt");
    return 'uptime is running';
};

dance;

true;
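With the app running (Dancer's standalone server listens on port 3000 by default), the route can be exercised with, for example:
curl http://localhost:3000/uptime/192.168.0.1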
Net::SSH2 can be used asynchronously, but it is quite buggy and crashes often. Forget about using it for running thousands (or even just hundreds) of connections in parallel in the same process. It may be OK if you wrap it in new processes as recommended by @Johandry, but then you can just run the ssh command using AnyEvent::Util::run_cmd.
Net::OpenSSH is another Perl module that can be used asynchronously. It shouldn't be too difficult to integrate it inside AnyEvent.
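Since the question mentions AnyEvent::timer, here is a rough sketch of that route with AnyEvent::Util::run_cmd driving plain ssh (the host list and per-host intervals are assumptions; note that under Dancer you would need an AnyEvent-friendly server such as Twiggy for this to share the process):

use AnyEvent;
use AnyEvent::Util qw(run_cmd);

my %interval_for = ('host1' => 30, 'host2' => 60);   # placeholder intervals

my %timers;
for my $machine (keys %interval_for) {
    $timers{$machine} = AnyEvent->timer(
        after    => 0,
        interval => $interval_for{$machine},
        cb       => sub {
            my ($out, $err) = ('', '');
            my $done = run_cmd ['ssh', $machine, 'uptime'],
                '>'  => \$out,    # capture stdout into $out
                '2>' => \$err;    # capture stderr into $err
            $done->cb(sub {
                my $exit = shift->recv;    # non-zero means ssh failed
                $exit ? warn "ssh to $machine failed: $err\n"
                      : print "$machine: $out";
            });
        },
    );
}

AnyEvent->condvar->recv;    # run the event loop (standalone test only)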

Perl read from socket missing first character

I am trying to read from an instrument connected over the network, using the TCP protocol, from Perl.
The code I have used is below:
my $socket = new IO::Socket::INET (
    PeerHost => '210.232.14.204',
    PeerPort => '23',
    Proto    => 'tcp',
) or die "ERROR in Socket Creation";

my $data = "";
while ( !( $data =~ m/"ABC"/ ) )
{
    my $temp = <$socket>;
    $data .= $temp;    # string concatenation, not numeric +
    print $temp;
}
What happens is that the first character of every line read over TCP is not printed. Instead it is replaced with a space. Why does this happen?
Example:
Expected output
Copyright (c) ACME Corporation
2009 - 2010
Actual Output
opyright (c) ACME Corporation
009 - 2010
Thanks...
The telnet protocol has a little bit of negotiation at its startup, something I sometimes jokingly refer to as a "secret handshake": the server sends IAC option-negotiation bytes that a real telnet client consumes and answers, but which a raw socket reader sees mixed into its data. You should use a more straight-through service/port to get up to speed with sockets.
Also, you really need two different threads of control for this sort of thing; otherwise it's too hard. Here's a simple telnet-ish program from 1998:
use strict;
use IO::Socket;

my ($host, $port, $kidpid, $handle, $line);

unless (@ARGV == 2) { die "usage: $0 host port" }
($host, $port) = @ARGV;

# create a tcp connection to the specified host and port
$handle = IO::Socket::INET->new(Proto    => "tcp",
                                PeerAddr => $host,
                                PeerPort => $port)
    or die "can't connect to port $port on $host: $!";

$handle->autoflush(1);    # so output gets there right away
print STDERR "[Connected to $host:$port]\n";

# split the program into two processes, identical twins
die "can't fork: $!" unless defined($kidpid = fork());

if ($kidpid) {
    # parent copies the socket to standard output
    while (defined ($line = <$handle>)) {
        print STDOUT $line;
    }
    kill("TERM" => $kidpid);    # send SIGTERM to child
}
else {
    # child copies standard input to the socket
    while (defined ($line = <STDIN>)) {
        print $handle $line;
    }
}
exit;
And here’s a more complete implementation, a program that sits on your firewall and waits for internal connections to some outside port:
#!/usr/bin/perl -w
# fwdport -- act as proxy forwarder for dedicated services

use strict;                  # require declarations
use Getopt::Long;            # for option processing
use Net::hostent;            # by-name interface for host info
use IO::Socket;              # for creating server and client sockets
use POSIX ":sys_wait_h";     # for reaping our dead children

my (
    %Children,               # hash of outstanding child processes
    $REMOTE,                 # whom we connect to on the outside
    $LOCAL,                  # where we listen to on the inside
    $SERVICE,                # our service name or port number
    $proxy_server,           # the socket we accept() from
    $ME,                     # basename of this program
);

($ME = $0) =~ s,.*/,,;       # retain just basename of script name

check_args();                # processing switches
start_proxy();               # launch our own server
service_clients();           # wait for incoming
die "NOT REACHED";           # you can't get here from there

# process command line switches using the extended
# version of the getopts library.
sub check_args {
    GetOptions(
        "remote=s"  => \$REMOTE,
        "local=s"   => \$LOCAL,
        "service=s" => \$SERVICE,
    ) or die <<EOUSAGE;
usage: $0 [ --remote host ] [ --local interface ] [ --service service ]
EOUSAGE
    die "Need remote"           unless $REMOTE;
    die "Need local or service" unless $LOCAL || $SERVICE;
}

# begin our server
sub start_proxy {
    my @proxy_server_config = (
        Proto  => 'tcp',
        Reuse  => 1,
        Listen => SOMAXCONN,
    );
    push @proxy_server_config, LocalPort => $SERVICE if $SERVICE;
    push @proxy_server_config, LocalAddr => $LOCAL   if $LOCAL;
    $proxy_server = IO::Socket::INET->new(@proxy_server_config)
        or die "can't create proxy server: $@";
    print "[Proxy server on ", ($LOCAL || $SERVICE), " initialized.]\n";
}
sub service_clients {
    my (
        $local_client,       # someone internal wanting out
        $lc_info,            # local client's name/port information
        $remote_server,      # the socket for escaping out
        @rs_config,          # temp array for remote socket options
        $rs_info,            # remote server's name/port information
        $kidpid,             # spawned child for each connection
    );

    $SIG{CHLD} = \&REAPER;   # harvest the moribund

    accepting();

    # an accepted connection here means someone inside wants out
    while ($local_client = $proxy_server->accept()) {
        $lc_info = peerinfo($local_client);
        set_state("servicing local $lc_info");
        printf "[Connect from $lc_info]\n";

        @rs_config = (
            Proto    => 'tcp',
            PeerAddr => $REMOTE,
        );
        push(@rs_config, PeerPort => $SERVICE) if $SERVICE;

        print "[Connecting to $REMOTE...";
        set_state("connecting to $REMOTE");    # see below
        $remote_server = IO::Socket::INET->new(@rs_config)
            or die "remote server: $@";
        print "done]\n";

        $rs_info = peerinfo($remote_server);
        set_state("connected to $rs_info");

        $kidpid = fork();
        die "Cannot fork" unless defined $kidpid;
        if ($kidpid) {
            $Children{$kidpid} = time();    # remember his start time
            close $remote_server;           # no use to master
            close $local_client;            # likewise
            next;                           # go get another client
        }

        # at this point, we are the forked child process dedicated
        # to the incoming client. but we want a twin to make i/o
        # easier.

        close $proxy_server;                # no use to slave

        $kidpid = fork();
        die "Cannot fork" unless defined $kidpid;

        # now each twin sits around and ferries lines of data.
        # see how simple the algorithm is when you can have
        # multiple threads of control?

        # this is the fork's parent, the master's child
        if ($kidpid) {
            set_state("$rs_info --> $lc_info");
            select($local_client); $| = 1;
            print while <$remote_server>;
            kill('TERM', $kidpid);          # kill my twin cause we're done
        }
        # this is the fork's child, the master's grandchild
        else {
            set_state("$rs_info <-- $lc_info");
            select($remote_server); $| = 1;
            print while <$local_client>;
            kill('TERM', getppid());        # kill my twin cause we're done
        }
        exit;                               # whoever's still alive bites it
    } continue {
        accepting();
    }
}
# helper function to produce a nice string in the form HOST:PORT
sub peerinfo {
    my $sock = shift;
    my $hostinfo = gethostbyaddr($sock->peeraddr);
    return sprintf("%s:%s",
                   $hostinfo->name || $sock->peerhost,
                   $sock->peerport);
}

# reset our $0, which on some systems makes "ps" report
# something interesting: the string we set $0 to!
sub set_state { $0 = "$ME [@_]" }

# helper function to call set_state
sub accepting {
    set_state("accepting proxy for " . ($REMOTE || $SERVICE));
}

# somebody just died. keep harvesting the dead until
# we run out of them. check how long they ran.
sub REAPER {
    my $child;
    my $start;
    while (($child = waitpid(-1, WNOHANG)) > 0) {
        if ($start = $Children{$child}) {
            my $runtime = time() - $start;
            printf "Child $child ran %dm%ss\n",
                   $runtime / 60, $runtime % 60;
            delete $Children{$child};
        } else {
            print "Bizarre kid $child exited $?\n";
        }
    }
    # If I had to choose between System V and 4.2, I'd resign. --Peter Honeyman
    $SIG{CHLD} = \&REAPER;
}
As I said, that’s from 1998. These days I’d use warnings and possibly use autodie, but you still should be able to learn a good bit from it.
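For instance, a modern prologue might look like this (autodie makes the wrapped built-ins throw on failure instead of needing "or die" after every call):

use strict;
use warnings;
use autodie;    # open, close, fork, etc. now die on failure automatically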