TCP Server multiple receive and respond - perl

I'm trying to emulate a TCP server on the same PC where the app is running. I don't know if it can be done in Perl, because I'm not very experienced. With the code below the first reply works, but I don't know how to implement the second.
#!/usr/bin/perl -w
use IO::Socket::INET;
use strict;

my $socket = IO::Socket::INET->new('LocalPort' => '3000',
                                   'Proto'     => 'tcp',
                                   'Listen'    => SOMAXCONN)
    or die "Can't create socket ($!)\n";
print "Server listening\n";

while (my $client = $socket->accept) {
    my $name = gethostbyaddr($client->peeraddr, AF_INET);
    my $port = $client->peerport;
    while (<$client>) {
        print "$_";
        print $client "RESPONSE1";
    }
    close $client
        or die "Can't close ($!)\n";
}
die "Can't accept socket ($!)\n";
EDIT: Thank you guys for the input. I ended up doing it in PHP and it's working, yay!

Use Net::Server for the connection, and a variable in the sub to keep the current state ($state in this code); something like this:
package MyServer;
use base qw/Net::Server/;
use strict;
use warnings;

sub process_request {
    my $self  = shift;
    my $state = 0;
    while (<STDIN>) {
        s/\r?\n$//;    # like chomp, but for CRLF too
        if ($state == 0 and $_ eq 'data1') {
            print "> okay1\n";
            $state++;
        } elsif ($state == 1 and $_ eq 'data2') {
            print "> okay2\n";
            $state++;
        } else {
            last if $state == 2;
            $state = 0;
        }
    }
}

my $port = shift || 3000;
MyServer->run( port => $port );
The example in the Net::Server POD suggests using an alarm to time out connections, which might be appropriate here. The code above produces the following session:
$ nc localhost 3000
data1
> okay1
data2
> okay2
data3
$
And if you need to move to a forking / preforking / non-blocking / co-routine driven system, there's a Net::Server personality for that.
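For reference, here is a rough sketch of that alarm-based timeout dropped into process_request (my own adaptation rather than the POD's example verbatim; the 30-second limit is arbitrary):

sub process_request {
    my $self  = shift;
    my $state = 0;
    eval {
        local $SIG{ALRM} = sub { die "Timed Out!\n" };
        alarm 30;                      # drop clients that dawdle for 30+ seconds
        while (<STDIN>) {
            s/\r?\n$//;
            if ($state == 0 and $_ eq 'data1') {
                print "> okay1\n";
                $state++;
            } elsif ($state == 1 and $_ eq 'data2') {
                print "> okay2\n";
                $state++;
            } else {
                last if $state == 2;
                $state = 0;
            }
        }
        alarm 0;
    };
    print "> timed out\n" if $@ && $@ =~ /Timed Out/;
}

Moving to one of those personalities is then mostly a matter of changing the base class, e.g. use base qw/Net::Server::PreFork/; instead of use base qw/Net::Server/;.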

"ready to go" code:
package MyServer;
use base qw/Net::Server/;
use strict;
use warnings;

sub process_request {
    my $self  = shift;
    my $state = 0;
    $| = 1;
    binmode *STDIN;
    while (read(*STDIN, local $_, 3)) {
        if ($state == 0 and $_ eq "\x{de}\x{c0}\x{ad}") {
            print "\x{c4}\x{1a}\x{20}\x{de}";
            $state++;
        } elsif ($state == 1 and $_ eq "\x{18}\x{c0}\x{0a}") {
            print "\x{11}\x{01}\x{73}\x{93}";
            $state++;
            last;
        }
    }
}

my $port = shift || 3000;
MyServer->run( port => $port );

It seems to me the process_request sub doesn't work correctly when a low port is set (in my case, port 23). In particular, only with a low port, the first request contains additional characters while parsing the input data (everything is fine with subsequent requests).
Do you have any tips? Thank you.


How to get methods from HTTP::Daemon

How can I find out the $code and $mess in the HTTP::Daemon module? On CPAN the usage is given as
$c->send_status_line( $code, $mess, $proto )
but I don't know where/how to get $code and $mess from.
For example, send_error($code) is used as send_error(RC_FORBIDDEN), which I found in someone's code online; where did they get RC_FORBIDDEN from?
I have been playing with the following code. Sorry for the formatting, and many thanks to @choroba for formatting it for me.
use warnings;
use strict;
use HTTP::Daemon;
use HTTP::Status;
use LWP;

my $daemon = HTTP::Daemon->new or die;
my $d = HTTP::Daemon->new(
    LocalAddr => '0.0.0.0',
    LocalPort => '5000',
);
printf("\n\n URL of webserver is %s, show this script with %stest\n",
    $d->url, $d->url);

while (my $client_connection = $d->accept) {
    new_connection($client_connection);
}

sub new_connection {
    my $client_connection = shift;
    printf "new connection\n";
    while (my $request = $client_connection->get_request) {
        if (my $pid = fork) {
            print "Child created : $pid\n";
        }
        elsif (!defined $pid) {
            die "Cannot fork $!\n";
        }
        else {
            my $address_of_client = $client_connection->peerhost();
            my $port_of_client    = $client_connection->peerport();
            print "Connection from client $address_of_client on port $port_of_client\n";
            print " request\n";
            if ($request->method eq 'GET' and $request->uri->path eq "/test") {
                $client_connection->send_file_response(RC_OK);
                #$client_connection->send_status_line(200);
                #print "OK ";
                #$client_connection->send_file_response($0);
            }
            else {
                $client_connection->send_error(RC_NOT_FOUND);
            }
        }
        $client_connection->close;
    }
}
The documentation also states
If $code is omitted 200 is assumed. If $mess is omitted, then a message corresponding to $code is inserted. If $proto is missing the content of the $HTTP::Daemon::PROTO variable is used.
So, you don't have to specify the arguments at all. Otherwise, just use any of the possible HTTP status codes for $code, and either don't specify the $mess to get the default message for the code, or use any message you like.
RC_FORBIDDEN is exported by HTTP::Status.
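To make that concrete, here is a small self-contained sketch of my own (not from the HTTP::Daemon docs; port 5000 matches the question's code, and $c is the connection handle returned by accept):

use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Status;    # exports the RC_* constants used below (RC_OK, RC_FORBIDDEN, ...)

my $d = HTTP::Daemon->new(LocalPort => 5000) or die;
while (my $c = $d->accept) {
    while (my $r = $c->get_request) {
        # $code is just an HTTP status code and $mess its text;
        # HTTP::Status provides named constants so you don't hard-code numbers
        if ($r->uri->path eq '/test') {
            $c->send_status_line;                       # 200 with the default message
        }
        else {
            $c->send_status_line(RC_FORBIDDEN);         # 403, default "Forbidden" text
            # or: $c->send_status_line(404, "No such thing here");
        }
    }
    $c->close;
    undef $c;
}

In a real handler you would normally follow the status line with headers and content, or just use send_response or send_error, which build the whole response for you.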

Better way to handle Perl sockets to read/write to an active process

First of all, I would ask you guys not to offer a workaround as the solution (although it would be cool to know other ways to do it). I was setting up the tg-master project (telegram for the CLI) to be used by a check_mk alert plugin. I found out that telegram runs as a stdin/stdout process, so I thought it would be cool to "glue" it, and I wrote, with a lot of building blocks from blogs and CPAN, the two pieces of code below. They already work (I need to handle broken pipes sometimes), but I was wondering whether sharing this might bring some new ideas from the experts.
As you can see, my code relies on an eval with a die when reading from the spawned process, and I know that is not the best way to do it. Any suggestions? :D
Thank you guys
Server
use strict;
use IO::Socket::INET;
use IPC::Open2;
use POSIX;

our $pid;

use sigtrap qw/handler signal_handler normal-signals/;

sub signal_handler {
    print "what a signal $!\nlets kill $pid\n";
    kill 'SIGKILL', $pid;
    #die "Caught a signal $!";
}

# auto-flush on socket
$| = 1;

# creating a listening socket
my $socket = new IO::Socket::INET(
    LocalHost => '0.0.0.0',
    LocalPort => '7777',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1
);
die "cannot create socket $!\n" unless $socket;
print "server waiting for client connection on port 7777\n";

my ( $read_proc, $write_proc );
my ( $uid, $gid ) = ( getpwnam "nagios" )[ 2, 3 ];
POSIX::setgid($gid);    # GID must be set before UID!
POSIX::setuid($uid);
$pid = open2( $read_proc, $write_proc, '/usr/bin/telegram' );

# flush first messages
eval {
    local $SIG{ALRM} = sub { die "Timeout" };    # alarm handler
    alarm(1);
    while (<$read_proc>) { }
};

while (1) {
    my $client_socket  = $socket->accept();
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();
    print "connection from $client_address:$client_port\n";

    # read until \n
    my $data = "";
    $data = $client_socket->getline();

    # write the line we got in $data to the spawned process's stdin
    print $write_proc $data;
    $data = "";

    eval {
        local $SIG{ALRM} = sub { die "Timeout" };    # alarm handler
        alarm(1);
        while (<$read_proc>) {
            $client_socket->send($_);
        }
    };

    # notify client that response has been sent
    shutdown( $client_socket, 1 );
}
$socket->close();
Client
echo "contact_list" | nc localhost 7777
or
echo "msg user#12345 NAGIOS ALERT ... etc" | nc localhost 7777
or
some other perl script =)
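For instance, a tiny Perl client equivalent to the nc one-liners could look like this (just a sketch; it assumes the server above is listening on localhost:7777):

use strict;
use warnings;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
    PeerHost => 'localhost',
    PeerPort => 7777,
    Proto    => 'tcp',
) or die "cannot connect: $!";

print $sock "contact_list\n";    # one newline-terminated command, like the nc example
print while <$sock>;             # dump whatever the server sends back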
If you are going to implement a script that performs both reads and writes from/to different handles, consider using select (the one defined as select RBITS,WBITS,EBITS,TIMEOUT in the documentation). In this case you will totally avoid using alarm with a signal handler in eval to handle a timeout, and will only have one loop with all of the work happening inside it.
Here is an example of a program that reads from both a process opened with open2 and a network socket, not using alarm at all:
use strict;
use warnings;
use IO::Socket;
use IPC::Open2;

use constant MAXLENGTH => 1024;

my $socket = IO::Socket::INET->new(
    Listen    => SOMAXCONN,
    LocalHost => '0.0.0.0',
    LocalPort => 7777,
    Reuse     => 1,
);

# accepting just one connection
print "waiting for connection...\n";
my $remote = $socket->accept();
print "remote client connected\n";

# simple example of the program writing something
my $pid = open2(my $localread, my $localwrite,
    "sh -c 'while : ; do echo boom; sleep 1 ; done'");

for ( ; ; ) {
    # clean up the bit vectors for select
    my $rin = '';
    my $win = '';
    my $ein = '';

    # we will wait for a possibility to read from these two descriptors
    vec($rin, fileno($localread), 1) = 1;
    vec($rin, fileno($remote),    1) = 1;

    # now wait
    select($rin, $win, $ein, undef);

    # check which one is ready; read with sysread, not <>, as the select doc warns
    if (vec($rin, fileno($localread), 1)) {
        print "read from local process: ";
        sysread($localread, my $data, MAXLENGTH);
        print $data;
    }
    if (vec($rin, fileno($remote), 1)) {
        print "read from remote client: ";
        sysread($remote, my $data, MAXLENGTH);
        print $data;
    }
}
In real production code you will need to carefully check for errors returned by the various functions (socket creation, open2, accept, and select).
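A rough illustration of what those checks might look like, boiled down to a single socket (my own sketch; the port and the messages are arbitrary):

use strict;
use warnings;
use IO::Socket;

my $socket = IO::Socket::INET->new(
    Listen    => SOMAXCONN,
    LocalPort => 7777,
    Reuse     => 1,
) or die "cannot create listening socket: $!";

my $remote = $socket->accept()
    or die "accept failed: $!";

my $rin = '';
vec($rin, fileno($remote), 1) = 1;

while (1) {
    my $rout   = $rin;
    my $nfound = select($rout, undef, undef, undef);
    die "select failed: $!" if $nfound < 0;

    if (vec($rout, fileno($remote), 1)) {
        my $n = sysread($remote, my $data, 1024);
        die "read failed: $!" unless defined $n;
        last if $n == 0;                 # client closed the connection
        print "client said: $data";
    }
}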

What is wrong with this IO::Socket::UNIX example?

I am trying to implement a simple echo client/server, over a Unix socket. (My ultimate goal is to exchange JSON data, but this example is for simplicity). I have no idea why the client process disappears into a black hole when it tries to print to the socket the second time.
server.pl :
use IO::Socket::UNIX;

my $socket_path = '/tmp/mysocket';
unlink $socket_path if -e $socket_path;

my $socket = IO::Socket::UNIX->new(
    Local  => $socket_path,
    Type   => SOCK_STREAM,
    Listen => SOMAXCONN,
);
die "Can't create socket: $!" unless $socket;

while (1) {
    next unless my $connection = $socket->accept;
    chomp( my $line = <$connection> );
    print $connection "$line\n";
}
client.pl :
use feature 'say';    # needed for say below
use IO::Socket::UNIX;

my $socket = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/tmp/mysocket',
);
die "Can't create socket: $!" unless $socket;

my $line;

print $socket "one\n";
chomp( $line = <$socket> );
say $line;

print $socket "two\n";
chomp( $line = <$socket> );
say $line;

say "three";
Expected output:
> ./client.pl
> one
> two
> three
Actual output:
> ./client.pl
> one
You put the $socket->accept call inside your while loop. After your server establishes a connection and receives some input from the client, the next thing it wants to do is establish a new connection.
Move the accept call outside the while loop:
my $connection = $socket->accept;
$connection->autoflush(1);
while (my $line = <$connection>) {
    chomp($line);
    print $connection "$line\n";
}
or, if you do want to accept more than one connection,
while (1) {
    next unless my $connection = $socket->accept;
    $connection->autoflush(1);
    while (my $line = <$connection>) {
        chomp($line);
        print $connection "$line\n";
    }
}
Your current solution will also likely be "suffering from buffering", so both the server and the client should set autoflush(1) on their socket handles.
Now, to handle simultaneous connections, the server would usually call fork after accepting a connection and handle that connection in a child process:
while (1) {
    my $connection = $socket->accept;
    if (fork() == 0) {
        $connection->autoflush(1);
        while (my $line = <$connection>) {
            chomp($line);
            print $connection "$line\n";
        }
        close $connection;
        exit;
    }
}
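One small addition worth considering with a forking server like this (my suggestion, not part of the original answer): the parent never waits on its children, so finished children linger as zombies. Auto-reaping them is a one-liner placed before the accept loop:

$SIG{CHLD} = 'IGNORE';    # let the OS reap exited children automatically

Alternatively, install a SIGCHLD handler that calls waitpid in a loop; perlipc covers both approaches.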
A complete bi-directional example for anyone looking:
Server:
#!/usr/bin/perl -w
use strict;
use IO::Socket::UNIX qw( SOCK_STREAM SOMAXCONN );

my $SOCK_PATH = '/tmp/test.sock';
unlink($SOCK_PATH) if -e $SOCK_PATH;

my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM(),
    Local  => $SOCK_PATH,
    Listen => SOMAXCONN,
) or die("Can't create server socket: $!\n");

while (1) {
    my $connection = $server->accept;
    if (fork() == 0) {
        print "** New connection received **\n";
        $connection->autoflush(1);
        my $count = 1;
        while (my $line = <$connection>) {
            if ($line) {
                chomp($line);
                $connection->print($count . ' -> ' . $line . "\n");    # send the response back to the client, \n terminates
                print "Received and replied to $count '$line'\n";
                $count++;
            }
        }
        close $connection;
        exit;
    }
}
Client:
#!/usr/bin/perl -w
use strict;
use IO::Socket::UNIX qw( SOCK_STREAM );

my $SOCK_PATH = '/tmp/test.sock';
my $client = IO::Socket::UNIX->new(
    Type => SOCK_STREAM(),
    Peer => $SOCK_PATH,
) or die("Can't connect to server: $!\n");
$client->autoflush(1);

## Listen for replies
if (fork() == 0) {
    while (my $line = <$client>) {
        if ($line) {
            chomp($line);
            print("Recv: '" . $line . "'\n");
        }
    }
}

## Send something
for my $itm ('Alpha', 'Beta', 'Gamma', 'Delta') {
    print("Send: " . $itm . "\n");
    print($client $itm . "\n") or warn("Can't send: $!\n");    # send to the server, \n terminates
}
print "** Client Finished **\n";

Creating A Single Threaded Server with AnyEvent (Perl)

I'm working on creating a local service to listen on localhost and provide a basic call and response type interface. What I'd like to start with is a baby server that you can connect to over telnet and echoes what it receives.
I've heard AnyEvent is great for this, but the documentation for AnyEvent::Socket does not give a very good example of how to do this. I'd like to build this with AnyEvent, AnyEvent::Socket and AnyEvent::Handle.
Right now the little server code looks like this:
#!/usr/bin/env perl
use AnyEvent;
use AnyEvent::Handle;
use AnyEvent::Socket;

my $cv   = AnyEvent->condvar;
my $host = '127.0.0.1';
my $port = 44244;

tcp_server($host, $port, sub {
    my ($fh) = @_;
    my $cv = AnyEvent->condvar;
    my $handle;
    $handle = AnyEvent::Handle->new(
        fh      => $fh,
        poll    => "r",
        on_read => sub {
            my ($self) = @_;
            print "Received: " . $self->rbuf . "\n";
            $cv->send;
        }
    );
    $cv->recv;
});
print "Listening on $host\n";
$cv->wait;
This doesn't work, and if I telnet to localhost:44244 I get this:
EV: error in callback (ignoring): AnyEvent::CondVar:
recursive blocking wait attempted at server.pl line 29.
I think if I understood how to make a small single-threaded server that I can connect to over telnet, and that prints out whatever it's given and then waits for more input, I could take it a lot further from there. Any ideas?
You're blocking inside a callback. That's not allowed. There are a few ways to handle this. My preference is to launch a Coro thread from within the tcp_server callback. But without Coro, something like this might be what you're looking for:
#!/usr/bin/env perl5.16.2
use AnyEvent;
use AnyEvent::Handle;
use AnyEvent::Socket;

my $cv   = AE::cv;
my $host = '127.0.0.1';
my $port = 44244;
my %connections;

tcp_server(
    $host, $port,
    sub {
        my ($fh) = @_;
        print "Connected...\n";
        my $handle;
        $handle = AnyEvent::Handle->new(
            fh      => $fh,
            poll    => 'r',
            on_read => sub {
                my ($self) = @_;
                print "Received: " . $self->rbuf . "\n";
            },
            on_eof => sub {
                my ($hdl) = @_;
                $hdl->destroy();
            },
        );
        $connections{$handle} = $handle;    # keep it alive.
        return;
    }
);
print "Listening on $host\n";
$cv->recv;
Note that I'm only waiting on one condvar, and I'm storing the handles to keep the AnyEvent::Handle objects alive longer. Work to clean up the $self->rbuf is left as an exercise for the reader :-)
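For what it's worth, one common way to consume the buffer is to let AnyEvent::Handle do the framing with push_read; a drop-in replacement for the on_read callback above might look like this (untested sketch):

on_read => sub {
    my ($self) = @_;
    # the "line" read type removes one full line from rbuf and
    # hands it to the callback, so nothing piles up in the buffer
    $self->push_read(line => sub {
        my ($hdl, $line) = @_;
        print "Received: $line\n";
    });
},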
Question cross-posted, answer, too :-)
I have heard good things about AnyEvent as well, but have not used it. I wrote a small nonblocking server in the past using IO::Select. There is an example in the documentation for that module (I've added a few lines):
use IO::Select;
use IO::Socket;

$lsn = new IO::Socket::INET(Listen => 1, LocalPort => 8080);
$sel = new IO::Select($lsn);

while (@ready = $sel->can_read) {
    foreach $fh (@ready) {
        if ($fh == $lsn) {
            # Create a new socket
            $new = $lsn->accept;
            $sel->add($new);
        }
        else {
            # Process socket
            my $input = <$fh>;
            print $fh "Hello there. You said: $input\n";
            # Maybe we have finished with the socket
            $sel->remove($fh);
            $fh->close;
        }
    }
}
I'm not sure what your condvar is trying to trigger there. Use it to send state, like:
#!/usr/bin/env perl
use AnyEvent;
use AnyEvent::Handle;
use AnyEvent::Socket;

my $host = '127.0.0.1';
my $port = 44244;
my $exit = AnyEvent->condvar;

tcp_server($host, $port, sub {
    my ($fh) = @_;
    my $handle;
    $handle = AnyEvent::Handle->new(
        fh      => $fh,
        poll    => "r",
        on_read => sub {
            my ($self) = @_;
            print "Received: " . $self->rbuf . "\n";
            if ($self->rbuf eq 'exit') {
                $exit->send;
            }
        }
    );
});
print "Listening on $host\n";
$exit->recv;

Can't read from socket in perl - possible deadlock?

My OS is Arch Linux with Perl 5.14.2. I am just trying to write a little program to accomplish a remote compile. The program passes a C source file to the server; the server calls gcc to compile the C code and passes back the compiler's messages. The client can't receive the compiler's messages, although I can see them on the server.
Here is the code:
#!/usr/bin/perl -w
# oj.pl --- alpha
use warnings;
use strict;
use IO::File;
use IO::Socket;

use constant MY_TRAN_PORT => 138000;

$| = 1;
my $tmpFileToBeCompiled = IO::File->new("> tmpFile09090989.c") or die "Can't create this file";
#if (defined $tmpFileToBeCompiled) {
#    print $tmpFileToBeCompiled "argh";    # just for test!
#}
# $fihi->close;

my $port = shift || MY_TRAN_PORT;
my $sock_server = IO::Socket::INET->new(
    Listen    => 20,
    LocalPort => $port,
    Timeout   => 60,
    Reuse     => 1
) or die "Can't create listening socket: $!\n";

my $tmp = 1;
while ($tmp) {
    next unless my $session = $sock_server->accept;
    my $peer = gethostbyaddr($session->peeraddr, AF_INET)
        || $session->peerhost;
    warn "Connection from [$peer, $port]\n";
    while (<$session>) {
        print $tmpFileToBeCompiled $_;    # if it works, the filehandle should be changed into tmpFile. just fixed.
        print $session "test!";
    }
    my @lines = `gcc tmpFile09090989.c 2>&1`;
    foreach (@lines) {
        print $session $_ . "test!!!\n";
        # $session->print;
    }
    print "OK!";
    $tmpFileToBeCompiled->close;
    warn "Connecting finished!\n";
    $session->close;
    $tmp--;
}
$sock_server->close;
----------------------------------------end--------------------------------------------------------
-------------------------------------client.pl--------------------------------------------------------
use warnings;
use strict;
use IO::Socket qw(:DEFAULT);
use File::Copy;
use constant MY_TRAN_PORT => 138000;
use IO::File;

my $host = shift || '127.0.0.1';
my $port = shift || MY_TRAN_PORT;
my $socket = IO::Socket::INET->new("$host:$port") or die $@;
my $fh = IO::File->new("a.c", "r");

my $child = fork();
die "Can't fork: $!\n" unless defined $child;

# if (!$child) {
#     $SIG{CHLD} = sub { exit 0 };
#     userToHost();
#     print "Run userToHost done!\n";
#     $socket->shutdown(1);
#     sleep;
# } else {
#     hostToUser();
#     print "Run hostToUser done! \n";
#     warn "Connection closed by foreign host\n";
# }

userToHost();
unless ($child) {
    hostToUser();
    print "Run hostToUser done! \n";
    warn "Connection closed by foreign host\n";
    $socket->close;
}

sub userToHost {
    while (<$fh>) {
        # print $_;    # for debug
        print $socket $_;
    }
}

sub hostToUser {
    while (<$socket >) {
        print $_;
    }
}

# copy ("a.c", $socket) or die "Copy failed: $!";
print "Done!";
You don't need to fork in the client at all, just like themel said.
You have an error in the client code: <$socket > should be <$socket>.
You need to notify the server that you have written all the data and it can start the compilation; otherwise the server will be stuck at while (<$session>) forever.
To achieve this you can call shutdown($socket, 1), which means you have finished writing. See perldoc -f shutdown.
Final prototype (very rough) could look like this: https://gist.github.com/19b589b8fc8072e3cfff
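Putting those points together (no fork, the fixed <$socket>, and a shutdown after sending), the client might end up roughly like this; this is only my sketch, not the linked gist:

use warnings;
use strict;
use IO::Socket qw(:DEFAULT);
use IO::File;
use constant MY_TRAN_PORT => 138000;    # same constant as in the question (note that valid TCP ports are 0-65535)

my $host = shift || '127.0.0.1';
my $port = shift || MY_TRAN_PORT;

my $socket = IO::Socket::INET->new("$host:$port") or die $@;
my $fh = IO::File->new("a.c", "r") or die "Can't open a.c: $!";

# send the source, then tell the server we are done writing
print $socket $_ while <$fh>;
shutdown($socket, 1) or die "shutdown failed: $!";

# read the compiler output until the server closes the connection
print while <$socket>;
print "Done!";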
yko nailed it, but let me just suggest that your task will be solved in a much easier and more maintainable way by a shell script running from inetd.