Can't read from socket in Perl - possible deadlock?

My OS is Arch Linux with Perl 5.14.2. I am writing a little program to do a remote compile. The client passes a C source file to the server; the server calls gcc to compile the C code and passes back the compiler's output. The problem is that the client never receives the compiler's output, although I can see the message on the server.
Here is the code:
#!/usr/bin/perl -w
# oj.pl --- alpha
use warnings;
use strict;
use IO::File;
use IO::Socket;

use constant MY_TRAN_PORT => 138000;

$| = 1;

my $tmpFileToBeCompiled = IO::File->new ("> tmpFile09090989.c") or die "Can't create this file";
#if (defined $tmpFileToBeCompiled) {
#    print $tmpFileToBeCompiled "argh"; # just for test!
#}
# $fihi->close;

my $port = shift || MY_TRAN_PORT;
my $sock_server = IO::Socket::INET->new (Listen    => 20,
                                         LocalPort => $port,
                                         Timeout   => 60,
                                         Reuse     => 1)
    or die "Can't create listening socket: $!\n";

my $tmp = 1;
while ($tmp) {
    next unless my $session = $sock_server->accept;

    my $peer = gethostbyaddr ($session->peeraddr, AF_INET)
        || $session->peerhost;
    warn "Connection from [$peer, $port]\n";

    while (<$session>) {
        print $tmpFileToBeCompiled $_; # if it works, the filehandle should be changed into tmpFile. just fixed.
        print $session "test!";
    }

    my @lines = `gcc tmpFile09090989.c 2>&1`;
    foreach (@lines) {
        print $session $_ . "test!!!\n";
        # $session->print;
    }

    print "OK!";
    $tmpFileToBeCompiled->close;

    warn "Connecting finished!\n";
    $session->close;
    $tmp--;
}
$sock_server->close;
And here is client.pl:
use warnings;
use strict;
use IO::Socket qw(:DEFAULT);
use File::Copy;
use constant MY_TRAN_PORT => 138000;
use IO::File;

my $host = shift || '127.0.0.1';
my $port = shift || MY_TRAN_PORT;

my $socket = IO::Socket::INET->new("$host:$port") or die $@;
my $fh = IO::File->new("a.c", "r");

my $child = fork();
die "Can't fork: $!\n" unless defined $child;

# if (!$child) {
#     $SIG{CHLD} = sub { exit 0 };
#     userToHost();
#     print "Run userToHost done!\n";
#     $socket->shutdown(1);
#     sleep;
# } else {
#     hostToUser();
#     print "Run hostToUser done! \n";
#     warn "Connection closed by foreign host\n";
# }

userToHost();
unless ($child) {
    hostToUser();
    print "Run hostToUser done! \n";
    warn "Connection closed by foreign host\n";
    $socket->close;
}

sub userToHost {
    while (<$fh>) {
        # print $_; # for debug
        print $socket $_;
    }
}

sub hostToUser {
    while (<$socket >) {
        print $_;
    }
}

# copy ("a.c", $socket) or die "Copy failed: $!";
print "Done!";

You don't need to fork in the client. At all. Just like themel said.
You have an error in the client code: <$socket > should be <$socket>.
You need to notify the server that you have written all the data, so that the server can start compilation. Otherwise the server will be stuck at while (<$session>) forever.
To achieve this, call shutdown($socket, 1), which means you have finished writing. See perldoc -f shutdown.
A final prototype (very rough) could look like this: https://gist.github.com/19b589b8fc8072e3cfff
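For concreteness, here is a minimal sketch of the client with those fixes applied (no fork, and a half-close after sending). The filename a.c comes from the question; note that the original MY_TRAN_PORT of 138000 exceeds the maximum TCP port number of 65535, so a smaller value is used here:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

use constant MY_TRAN_PORT => 13800;   # must be <= 65535

my $host = shift || '127.0.0.1';
my $port = shift || MY_TRAN_PORT;

my $socket = IO::Socket::INET->new("$host:$port")
    or die "Can't connect: $@\n";

# Send the whole C source file to the server.
open my $fh, '<', 'a.c' or die "Can't open a.c: $!\n";
print $socket $_ while <$fh>;
close $fh;

# Half-close: tell the server we are done writing, so its
# while (<$session>) loop sees EOF and compilation can start.
shutdown($socket, 1) or die "Can't shutdown: $!\n";

# Read the compiler's output until the server closes the socket.
print while <$socket>;
$socket->close;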

yko nailed it, but let me just suggest that your task will be solved in a much easier and more maintainable way by a shell script running from inetd.
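For illustration, a rough sketch of that approach: inetd (or xinetd) accepts the connection and launches the program with the socket attached to STDIN and STDOUT, so the whole service reduces to a read-compile-print filter. The service name, user, and path below are hypothetical.

#!/usr/bin/perl
# compiled.pl -- run from inetd, so STDIN/STDOUT are the client socket.
# Hypothetical /etc/inetd.conf entry:
#   mycompile stream tcp nowait nobody /usr/local/bin/compiled.pl compiled.pl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Slurp the C source the client sends (the client half-closes at EOF).
my ($fh, $src) = tempfile(SUFFIX => '.c', UNLINK => 1);
print $fh $_ while <STDIN>;
close $fh;

# Compile and send gcc's diagnostics straight back over the socket.
print `gcc "$src" -o /dev/null 2>&1`;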

Related

Reading STDOUT and STDERR of external command with no wait

I would like to execute the external command rtmpdump and read its STDOUT and STDERR separately, not waiting until the command ends but reading its partial output in chunks as it becomes available.
What is a safe way to do this in Perl?
This is code I have that works on a per-line basis:
#!/usr/bin/perl
use warnings;
use strict;

use Symbol;
use IPC::Open3;
use IO::Select;

sub execute {
    my ($cmd) = @_;

    print "[COMMAND]: $cmd\n";
    my $pid = open3(my $in, my $out, my $err = gensym(), $cmd);
    print "[PID]: $pid\n";

    my $sel = IO::Select->new;
    $sel->add($out, $err);

    while (my @fhs = $sel->can_read) {
        foreach my $fh (@fhs) {
            my $line = <$fh>;
            unless (defined $line) {
                $sel->remove($fh);
                next;
            }
            if ($fh == $out) {
                print "[OUTPUT]: $line";
            } elsif ($fh == $err) {
                print "[ERROR] : $line";
            } else {
                die "[ERROR]: This should never execute!";
            }
        }
    }
    waitpid($pid, 0);
}
But the above code works in text mode only, I believe. To use rtmpdump as the command, I need to collect the partial output in binary mode, so I cannot read STDOUT line by line as the above code does. The binary STDOUT output should be stored in a variable, not printed.
Using blocking functions (e.g. readline aka <>, read, etc.) inside a select loop defeats the purpose of using select.
# Assumes $out, $err and $pid from the question's open3 call,
# and $sel = IO::Select->new; as before.
$sel->add($out, $err);

my %bufs = ( $out => '', $err => '' );  # start empty so length() is defined
while ($sel->count) {
    for my $fh ($sel->can_read) {
        my $rv = sysread($fh, $bufs{$fh}, 128*1024, length($bufs{$fh}));
        if (!defined($rv)) {
            # Error
            die $!;
        }
        if (!$rv) {
            # Eof
            $sel->remove($fh);
            next;
        }

        # Emit stderr as soon as complete lines accumulate.
        if ($fh == $err) {
            while ($bufs{$err} =~ s/^(.*\n)//) {
                print "[ERROR] $1";
            }
        }
    }
}

print "[ERROR] $bufs{$err}\n" if length($bufs{$err});
waitpid($pid, 0);

... do something with $bufs{$out} ...
But it would be much simpler to use IPC::Run.
use IPC::Run qw( run );

my ($out_buf, $err_buf);
run [ 'sh', '-c', $cmd ],
    '>',  \$out_buf,
    '2>', sub {
        $err_buf .= $_[0];
        while ($err_buf =~ s/^(.*\n)//) {
            print "[ERROR] $1";
        }
    };

print "[ERROR] $err_buf\n" if length($err_buf);

... do something with $out_buf ...
If you're on a POSIX system, try using Expect.pm. This is exactly the sort of problem it is designed to solve, and it also simplifies the task of sending keystrokes to the spawned process.
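For instance, a rough sketch along those lines (untested; note that a pty merges the child's STDOUT and STDERR, so this fits cases where you don't need them separated):

use strict;
use warnings;
use Expect;

my $cmd = 'rtmpdump ...';          # placeholder command line

my $exp = Expect->new;
$exp->raw_pty(1);                  # must be set before spawn; keeps data raw
$exp->log_stdout(0);               # don't echo the child's output
$exp->spawn($cmd) or die "Cannot spawn: $!\n";

my $buf = '';
# Block until any new output arrives, then accumulate the raw bytes;
# expect() returns undef on EOF, which ends the loop.
while ($exp->expect(undef, '-re', '(?s).+')) {
    $buf .= $exp->match;
}
$exp->soft_close;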

TCP Server multiple receive and respond

I'm trying to emulate a TCP server on the same PC where the app is running.
I don't know if it can be done in Perl, because I'm not very experienced.
With the code below the first reply is working, but I don't know how to implement the second.
#!/usr/bin/perl -w
use IO::Socket::INET;
use strict;

my $socket = IO::Socket::INET->new('LocalPort' => '3000',
                                   'Proto'     => 'tcp',
                                   'Listen'    => SOMAXCONN)
    or die "Can't create socket ($!)\n";
print "Server listening\n";

while (my $client = $socket->accept) {
    my $name = gethostbyaddr($client->peeraddr, AF_INET);
    my $port = $client->peerport;

    while (<$client>) {
        print "$_";
        print $client "RESPONSE1";
    }

    close $client
        or die "Can't close ($!)\n";
}

die "Can't accept socket ($!)\n";
EDIT: Thank you guys for the input; I ended up doing it in PHP and it's working, yay!
Use Net::Server for the connection, and a variable in the sub to keep the current state ($state in this code); something like this:
package MyServer;
use base qw/Net::Server/;
use strict;
use warnings;

sub process_request {
    my $self = shift;
    my $state = 0;
    while (<STDIN>) {
        s/\r?\n$//; # like chomp, but for CRLF too
        if ($state == 0 and $_ eq 'data1') {
            print "> okay1\n";
            $state++;
        } elsif ($state == 1 and $_ eq 'data2') {
            print "> okay2\n";
            $state++;
        } else {
            last if $state == 2;
            $state = 0;
        }
    }
}

my $port = shift || 3000;
MyServer->run( port => $port );
The example in the Net::Server POD suggests using an alarm to timeout connections, which might be appropriate here. The code above does the following:
$ nc localhost 3000
data1
> okay1
data2
> okay2
data3
$
And if you need to move to a forking / preforking / non-blocking / co-routine driven system, there's a Net::Server personality for that.
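For reference, a sketch of that alarm idea applied to the server above (the 30-second limit is arbitrary):

package MyServer;
use base qw/Net::Server/;
use strict;
use warnings;

sub process_request {
    my $self = shift;
    my $state = 0;
    eval {
        local $SIG{ALRM} = sub { die "Timed out!\n" };
        alarm 30;                       # give up on clients idle for 30s
        while (<STDIN>) {
            s/\r?\n$//;
            alarm 30;                   # reset the clock on each line
            if ($state == 0 and $_ eq 'data1') {
                print "> okay1\n";
                $state++;
            } elsif ($state == 1 and $_ eq 'data2') {
                print "> okay2\n";
                $state++;
            } else {
                last if $state == 2;
                $state = 0;
            }
        }
        alarm 0;
    };
    print "> timeout\n" if $@;          # tell the client why we hung up
}

my $port = shift || 3000;
MyServer->run( port => $port );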
"ready to go" code:
package MyServer;
use base qw/Net::Server/;
use strict;
use warnings;

sub process_request {
    my $self = shift;
    my $state = 0;
    $| = 1;
    binmode *STDIN;
    while (read(*STDIN, local $_, 3)) {
        if ($state == 0 and $_ eq "\x{de}\x{c0}\x{ad}") {
            print "\x{c4}\x{1a}\x{20}\x{de}";
            $state++;
        } elsif ($state == 1 and $_ eq "\x{18}\x{c0}\x{0a}") {
            print "\x{11}\x{01}\x{73}\x{93}";
            $state++;
            last;
        }
    }
}

my $port = shift || 3000;
MyServer->run( port => $port );
It seems to me the process_request sub doesn't work correctly when a low port is set (in my situation, port 23). In particular, only with a low port, while parsing the input data, the first request contains additional characters (but all is OK with subsequent requests).
Do you have any tips? Thank you

How to multithread seeing if a webpage exists in Perl?

I'm writing a Perl script that takes in a list of URLs and checks to see if they exist. (Note that I only care if they exist; I don't care what their contents are.) Here's the important part of the program.
use LWP::Simple qw($ua head);

if (head($url)) {
    $numberAlive++;
}
else {
    $numberDead++;
}
Right now the program works fine; however, I want it to run faster. Thus I'm considering making it multithreaded. I assume that the slow part of my program is contacting the server for each URL; therefore, I'm looking for a way in which I can send out requests to the URLs of other webpages on my list while I'm waiting for the first response. How can I do this? As far as I can tell, the head routine doesn't have a callback that can get called once the server has responded.
Begin with familiar-looking front matter.
#! /usr/bin/env perl
use strict;
use warnings;
use 5.10.0; # for // (defined-or)
use IO::Handle;
use IO::Select;
use LWP::Simple;
use POSIX qw/ :sys_wait_h /;
use Socket;
Global constants control program execution.
my $DEBUG = 0;
my $EXIT_COMMAND = "<EXIT>";
my $NJOBS = 10;
URLs to check arrive one per line on a worker's end of the socket. For each URL, the worker calls LWP::Simple::head to determine whether the resource is fetchable. The worker then writes back to the socket a line of the form url: status, where status is either "YES" or "NO".
If the URL is $EXIT_COMMAND, then the worker exits immediately.
sub check_sites {
    my ($s) = @_;

    warn "$0: [$$]: waiting for URL" if $DEBUG;

    while (<$s>) {
        chomp;
        warn "$0: [$$]: got '$_'" if $DEBUG;
        exit 0 if $_ eq $EXIT_COMMAND;
        print $s "$_: ", (head($_) ? "YES" : "NO"), "\n";
    }

    die "NOTREACHED";
}
To create a worker, we start by creating a socketpair. The parent process will use one end and each worker (child) will use the other. We disable buffering at both ends and add the parent end to our IO::Select instance. We also note each child’s process ID so we can wait for all workers to finish.
sub create_worker {
    my ($sel, $kidpid) = @_;

    socketpair my $parent, my $kid, AF_UNIX, SOCK_STREAM, PF_UNSPEC
        or die "$0: socketpair: $!";
    $_->autoflush(1) for $parent, $kid;

    my $pid = fork // die "$0: fork: $!";
    if ($pid) {
        ++$kidpid->{$pid};
        close $kid or die "$0: close: $!";
        $sel->add($parent);
    }
    else {
        close $parent or die "$0: close: $!";
        check_sites $kid;
        die "NOTREACHED";
    }
}
To dispatch URLs, the parent grabs as many readers as are available and hands out the same number of URLs from the job queue. Any workers that remain after the job queue is empty receive the exit command.
Note that print will fail if the underlying worker has already exited. The parent must ignore SIGPIPE to prevent immediate termination.
sub dispatch_jobs {
    my ($sel, $jobs) = @_;

    foreach my $s ($sel->can_write) {
        my $url = @$jobs ? shift @$jobs : $EXIT_COMMAND;
        warn "$0 [$$]: sending '$url' to fd ", fileno $s if $DEBUG;
        print $s $url, "\n" or $sel->remove($s);
    }
}
By the time control reaches read_results, the workers have been created and received work. Now the parent uses can_read to wait for results to arrive from one or more workers. A defined result is an answer from the current worker, and an undefined result means the child has exited and closed the other end of the socket.
sub read_results {
    my ($sel, $results) = @_;

    warn "$0 [$$]: waiting for readers" if $DEBUG;

    foreach my $s ($sel->can_read) {
        warn "$0: [$$]: reading from fd ", fileno $s if $DEBUG;
        if (defined(my $result = <$s>)) {
            chomp $result;
            push @$results, $result;
            warn "$0 [$$]: got '$result' from fd ", fileno $s if $DEBUG;
        }
        else {
            warn "$0 [$$]: eof from fd ", fileno $s if $DEBUG;
            $sel->remove($s);
        }
    }
}
The parent must keep track of live workers in order to collect all results.
sub reap_workers {
    my ($kidpid) = @_;

    while ((my $pid = waitpid -1, WNOHANG) > 0) {
        warn "$0: [$$]: reaped $pid" if $DEBUG;
        delete $kidpid->{$pid};
    }
}
Running the pool executes the subs above to dispatch all URLs and return all results.
sub run_pool {
    my ($n, @jobs) = @_;

    my $sel = IO::Select->new;
    my %kidpid;
    my @results;

    create_worker $sel, \%kidpid for 1 .. $n;

    local $SIG{PIPE} = "IGNORE"; # writes to dead workers will fail

    while (@jobs || keys %kidpid || $sel->handles) {
        dispatch_jobs $sel, \@jobs;
        read_results $sel, \@results;
        reap_workers \%kidpid;
    }

    warn "$0 [$$]: returning @results" if $DEBUG;
    @results;
}
Using an example main program
my @jobs = qw(
    bogus
    http://stackoverflow.com/
    http://www.google.com/
    http://www.yahoo.com/
);

my @results = run_pool $NJOBS, @jobs;
print $_, "\n" for @results;
the output is
bogus: NO
http://www.google.com/: YES
http://stackoverflow.com/: YES
http://www.yahoo.com/: YES
Another option is HTTP::Async.
#!/usr/bin/perl
use strict;
use warnings;

use HTTP::Request;
use HTTP::Async;

my $numberAlive = 0;
my $numberDead  = 0;

my @urls = ('http://www.perl.com', 'http://www.example.xyzzy/foo.html');

my $async = HTTP::Async->new;

# you might want to wrap this in a loop to deal with @urls in batches
foreach my $url (@urls) {
    $async->add( HTTP::Request->new( HEAD => $url ) );
}

while ( my $response = $async->wait_for_next_response ) {
    if ($response->code == 200) { $numberAlive++; }
    else                        { $numberDead++;  }
}

print "$numberAlive Alive, $numberDead Dead\n";
Worker-based parallelisation (using your choice of threads or processes):
use strict;
use warnings;

use feature qw( say );

use threads;    # or: use forks;

use LWP::Simple          qw( head );
use Thread::Queue::Any   qw( );

use constant NUM_WORKERS => 10;  # Or whatever.

my $req_q  = Thread::Queue::Any->new();
my $resp_q = Thread::Queue::Any->new();

my @workers;
for (1..NUM_WORKERS) {
    push @workers, async {
        while (my $url = $req_q->dequeue()) {
            my $is_alive = head($url) ? 1 : 0;
            $resp_q->enqueue($is_alive);
        }
    };
}

$req_q->enqueue($_) for @urls;  # @urls is assumed to hold the URLs to check

my ($alive, $dead) = (0, 0);    # initialize to avoid undef warnings
for (1..@urls) {
    my $is_alive = $resp_q->dequeue();
    ++( $is_alive ? $alive : $dead );
}

$req_q->enqueue(undef) for @workers;
$_->join for @workers;

say $alive;
say $dead;

Reading from Perl pipe constantly outputting text

I recently tried to make a game server controller in Perl. I would like to start and stop the game server and view the text it has output. This is what I have so far:
#!/usr/bin/perl -w
use IO::Socket;
use Net::hostent;  # for OO version of gethostbyaddr

$PORT = 9050;      # pick something not in use

$server = IO::Socket::INET->new( Proto     => 'tcp',
                                 LocalPort => $PORT,
                                 Listen    => SOMAXCONN,
                                 Reuse     => 1);
die "can't setup server" unless $server;
print "[Server $0 accepting clients]\n";

while ($client = $server->accept()) {
    $client->autoflush(1);
    print $client "Welcome to $0; type help for command list.\n";
    $hostinfo = gethostbyaddr($client->peeraddr);
    printf "[Connect from %s]\n", $hostinfo->name || $client->peerhost;
    print $client "Command? ";
    while (<$client>) {
        next unless /\S/;  # blank line
        if (/quit|exit/i) {
            last;
        }
        elsif (/some|thing/i) {
            printf $client "%s\n", scalar localtime;
        }
        elsif (/start/i) {
            open RSPS, '|java -jar JARFILE.jar' or die "ERROR STARTING: $!\n";
            print $client "I think it started...\n Say status for output\n";
        }
        elsif (/stop/i) {
            print RSPS "stop";
            close(RSPS);
            print $client "Should be closed.\n";
        }
        elsif (/status/i) {
            $output = <RSPS>;
            print $client $output;
        }
        else {
            print $client "Hmmmm\n";
        }
    } continue {
        print $client "Command? ";
    }
    close $client;
}
I am having trouble reading from the pipe, any ideas?
Thanks!
You are trying to do both reading and writing on the RSPS filehandle, though you have only opened it for writing (open RSPS, '|java -jar JARFILE.jar' means start the java process and use the RSPS filehandle to write to the standard input of the java process).
To read the output of the process, you will either need to write the process output to a file and open a separate filehandle to that file
open RSPS, '| java -jar JARFILE.jar > jarfile.out';
open PROC_OUTPUT, '<', 'jarfile.out';
or check out a module like IPC::Open3, which was made for applications like this.
use IPC::Open3;
# write to RSPS and read from PROC_OUTPUT and PROC_ERROR
open3(\*RSPS, \*PROC_OUTPUT, \*PROC_ERROR,
'java -jar JARFILE.jar');
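A rough sketch of how those handles might then serve the status branch, polling with IO::Select so the command loop never blocks ($client is the connected socket from the question's code):

use IPC::Open3;
use IO::Select;

my $pid = open3(\*RSPS, \*PROC_OUTPUT, \*PROC_ERROR,
                'java -jar JARFILE.jar');
my $sel = IO::Select->new(\*PROC_OUTPUT, \*PROC_ERROR);

# In the /status/ branch: forward whatever output is ready right now.
while (my @ready = $sel->can_read(0)) {   # timeout 0 polls without blocking
    for my $fh (@ready) {
        if (sysread($fh, my $chunk, 4096)) {
            print $client $chunk;
        }
        else {
            $sel->remove($fh);            # EOF or error: stop watching
        }
    }
}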

How To Avoid a Perl Script Calling Another Perl Script

I am calling a Perl script, client.pl, from a main script and capturing the output of client.pl in @output.
Is there any way to avoid the use of these two files, so that I can use the output of client.pl in main.pl itself?
Here is my code:
main.pl
=======
my @output = readpipe("client.pl");
client.pl
=========
#! /usr/bin/perl -w
#use strict;
use Socket;

# initialize host and port
my $host = shift || $FTP_SERVER;
my $port = shift || $CLIENT_PORT;
my $proto = getprotobyname('tcp');

# get the port address
my $iaddr = inet_aton($host);
my $paddr = sockaddr_in($port, $iaddr);

# create the socket, connect to the port
socket(SOCKET, PF_INET, SOCK_STREAM, $proto) or die "socket: $!\n";
connect(SOCKET, $paddr) or die "connect: $!\n";

my $line;
while ($line = <SOCKET>)
{
    print "$line\n";
}
close SOCKET or die "close: $!";
/rocky..
Put the common code in a package. Use the package in client.pl and main.pl. Chapter 10 of Programming Perl has more information.
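A minimal sketch of that arrangement; the module name MyClient and its interface are invented for illustration:

# MyClient.pm -- the code both scripts share
package MyClient;
use strict;
use warnings;
use Socket;
use Exporter 'import';
our @EXPORT_OK = qw(fetch_lines);

# Connect to $host:$port over TCP and return every line the server sends.
sub fetch_lines {
    my ($host, $port) = @_;
    socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
        or die "socket: $!\n";
    connect($sock, sockaddr_in($port, inet_aton($host)))
        or die "connect: $!\n";
    my @lines = <$sock>;
    close $sock or die "close: $!\n";
    return @lines;
}

1;

# main.pl -- no separate client.pl process needed
use strict;
use warnings;
use MyClient qw(fetch_lines);

my @output = fetch_lines('ftp.example.com', 2121); # hypothetical host/port
print @output;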
Not sure what you are really trying to do, but it might be worth investigating a package such as Net::FTP ( http://search.cpan.org/perldoc?Net%3A%3AFTP ).
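For example, a bare-bones Net::FTP session (host, credentials, and filename are placeholders):

use strict;
use warnings;
use Net::FTP;

my $ftp = Net::FTP->new('ftp.example.com')
    or die "Cannot connect: $@\n";
$ftp->login('anonymous', 'anon@example.com')
    or die "Cannot login: ", $ftp->message;
$ftp->get('somefile.txt')
    or die "get failed: ", $ftp->message;
$ftp->quit;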
You can do two things:
Merge the code of client.pl into main.pl, since your main script does no work other than printing. If you want to do more with the incoming data, you should do that in client.pl itself, because an in-memory array (@output) may run out of RAM while reading large amounts of data across the network.
If you want the output in an array (@output):
sub client {
    # initialize ..
    my @array = (); # empty array
    while ($line = <SOCKET>)
    {
        push(@array, $line);
    }
    return @array;
}

@output = client();
print @output;
Alternatively, you can use references:
sub client {
    # initialize ..
    my @array = (); # empty array
    while ($line = <SOCKET>)
    {
        push(@array, $line);
    }
    return \@array; # return a reference to the array
}

my $output_ref = client();
print @$output_ref; # dereference and print