Proper use of IO::Socket::INET for a TCP client in Perl

I have a question about how I should be using IO::Socket; I have a script that should run constantly, monitoring an Asterisk server for certain events. When these events happen, the script sends data from the event off to another server via a TCP socket. I've found that occasionally, the socket will close. My question is whether I should be using a single socket, and keep it open forever (and figure out why + prevent it from closing), or should I open and close a new socket for each bit of data sent out?
My experience with this sort of thing is very minimal, and I've read all the documentation without finding the answer I'm looking for. Below is a sample of what I've got so far:
#!/usr/bin/perl
use strict;
use warnings;
use Asterisk::AMI;
use IO::Socket;

# state for in-progress calls, keyed by Uniqueid
my %call;

my $sock = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => '1234',
    Proto    => 'tcp',
);

sub newchannel {
    my ($ami, $event) = @_;
    if ($event->{'Context'} eq "from-trunk") {
        my $unique_id = $event->{'Uniqueid'};
        my $this_call = $call{$unique_id};
        $this_call->{caller_name}   = $event->{'CallerIDName'};
        $this_call->{caller_number} = $event->{'CallerIDNum'};
        $this_call->{dnis}          = $event->{'Exten'};
        $call{$unique_id} = $this_call;
    }
}

sub ringcheck {
    my ($ami, $event) = @_;
    if ($event->{SubEvent} eq 'Begin') {
        my $unique_id = $event->{UniqueID};
        if (exists $call{$unique_id}) {
            my $this_call = $call{$unique_id};
            $this_call->{system_extension} = $event->{Dialstring};
            $this_call->{dest_uniqueid}    = $event->{DestUniqueID};
            printf $sock "R|%s|%s|%s||%s\n",
                $this_call->{caller_name},
                $this_call->{caller_number},
                $this_call->{system_extension},
                $this_call->{dnis};
            $this_call->{status} = "ringing";
        }
    }
}
There's a bit more to it than that, but this shows where I feel I should be starting/stopping a new socket (within the ringcheck sub).
Let me know if you need me to clarify or add anything.
Thanks!

Whether it is better to establish a new connection for each message or to keep the connection open depends on a few factors:
Is the overhead associated with establishing connections significant? This depends on factors such as how frequently messages need to be sent and the quality of the network connection.
If the remote end is 'localhost', as in your sample script above, this is not likely to be an issue; in fact, in that case I would recommend using a Unix domain socket instead anyway (a small sketch follows this list).
Is the remote end sending anything back? Sporadic connections are much harder to manage if either side may have asynchronous messages to send. That does not sound like the case for you, though.
Are there any significant resources that you would tie up by keeping the connection open?
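For reference, a Unix domain client connection is only a small change from the INET version. A minimal sketch, assuming a local server listening on a made-up socket path:

use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

# Same idea as the INET socket in the question, but over a local
# filesystem socket; '/var/run/myservice.sock' is a hypothetical path.
my $sock = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/var/run/myservice.sock',
) or die "Cannot connect to Unix domain socket: $!";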
Note that I don't consider random connection dropouts a good reason to argue for making a new connection each time. If possible, it is better to diagnose that problem in any case; otherwise, you may get unreliable performance no matter which approach you take.
In my experience, a very common reason for seemingly random dropouts in long-held TCP connections is an intermediate connection-tracking firewall. Such firewalls drop a connection if they don't see any activity on it for a period of time, to conserve their own resources. One way to combat this, which I use in some of my tools, is to set the SO_KEEPALIVE option on the socket, like this:
use Socket;
...
setsockopt($sock, SOL_SOCKET, SO_KEEPALIVE, 1);
This has a couple of benefits: it makes the kernel send keepalive messages on your connection at regular intervals, even if all is quiet, which by itself is enough to keep some firewalls happy. Also, if your connection does drop, your program can find out straight away rather than the next time you try to write to it (although you may not notice unless you are regularly checking your sockets for errors).
Perhaps your best approach is to set SO_KEEPALIVE and keep the socket open, but also check for errors whenever you write to it; if a write fails, close the socket and re-open the connection.
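As a rough illustration of that pattern, here is a minimal sketch (untested against your setup; open_socket and send_line are hypothetical helper names, and the address/port are the ones from your sample script):

use strict;
use warnings;
use IO::Socket::INET;
use Socket qw(SOL_SOCKET SO_KEEPALIVE);

$SIG{PIPE} = 'IGNORE';   # a write to a dead socket should fail, not kill us

# Hypothetical helper: (re)open the connection with keepalive enabled.
sub open_socket {
    my $sock = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => '1234',
        Proto    => 'tcp',
    ) or return;
    setsockopt($sock, SOL_SOCKET, SO_KEEPALIVE, 1);
    return $sock;
}

# Hypothetical helper: write a line, reconnecting once if the write fails.
sub send_line {
    my ($sock_ref, $line) = @_;

    $$sock_ref = open_socket() unless $$sock_ref;
    return 0 unless $$sock_ref;

    return 1 if print { $$sock_ref } $line;

    # The write failed: drop the dead socket and retry once on a fresh one.
    close $$sock_ref;
    $$sock_ref = open_socket() or return 0;
    return print { $$sock_ref } $line;
}

You would call it as send_line(\$sock, "R|...\n"). Keep in mind that a write onto an already-broken TCP connection can still appear to succeed once (the failure often only surfaces on the next write), so this narrows the window rather than closing it completely.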
This question may also be of use to you.

Related

Perl IO::Socket::INET + IO::Async::Stream reconnect to TCP server when disconnected

For the life of me, I can't seem to figure out how to get a standard TCP socket connection to reconnect after a disconnect, particularly in the context of an IO::Async::Loop.
Some basics:
#!/usr/bin/perl
use strict;
use warnings;
use Socket;
use IO::Async::Loop;
use IO::Async::Stream;
use IO::Socket;
use Time::HiRes qw(usleep);

# standard event loop
my $loop = IO::Async::Loop->new;

# notification service socket connection; we only write outgoing
my $NOTIFY = IO::Socket::INET->new(
    PeerHost  => $a_local_network_host,
    PeerPort  => $comm_port,
    Proto     => 'tcp',
    Type      => SOCK_STREAM,
    ReuseAddr => 1,
    Blocking  => 0
) or warn("Can't connect to NOTIFY: $!\n");

setsockopt($NOTIFY, SOL_SOCKET, SO_KEEPALIVE, 1);

# main interface element via IO::Async
my $notifystream = IO::Async::Stream->new(
    handle  => $NOTIFY,
    on_read => sub {
        my ( $self, $buffref, $eof ) = @_;
        # here's where we need to handle $eof if the remote goes away
        if ($eof) {
            # i have tried several things here
            usleep(200000); # give the remote service some milliseconds to start back up
            # process fails if remote is not back up, so i know the timeout is 'good enough' for this test

            # attempt to reconnect. have also tried recreating from scratch
            $NOTIFY->connect("$a_local_network_host:$comm_port");

            # this doesn't seem to have any effect
            $self->configure(handle => $NOTIFY);
        }
    }
);

$loop->add( $notifystream );

# kickstart the event loop
$loop->run;

### -- Meanwhile, elsewhere in the code -- ###
$notifystream->write("data for notification service\n");
In reality, there are many more things going on in the loop. I also have more sophisticated ways to test for a closed or broken socket, further error handlers on the $notifystream, and a better timeout/backoff for reconnecting to the remote service; however, this should show the main crux of what I'm doing.
When the remote server goes away for any reason, I'd like to attempt to reconnect to it without disrupting the rest of the system. In most cases the remote sends EOF cleanly because it's intentionally rebooting (not my choice, just something I have to deal with), but I'd also like to handle other communication errors as well.
In practice, the above code acts as though it works; however, the remote service no longer receives further write calls to the $notifystream. No errors are generated, and the $notifystream happily takes further writes, but they are not delivered to the remote.
I have a feeling I'm doing this wrong. I'm not looking to rewrite the rest of the application's event loop, so please no 'just use AnyEvent'-type responses; I'm really hoping to gain a better understanding of how to reconnect/reuse/recreate the variables in use here (IO::Socket::INET and IO::Async::Stream) to compensate when a remote server is temporarily unavailable.
Any suggestions or references towards this goal are welcome. Thanks!
-=-=-=-=-
To summarize the errors I have (and have not) received:
If I leave out the usleep, the reconnect (or recreation) of the base socket fails because the remote service is not yet available.
If I attempt to recreate the socket from scratch and then 'configure' the stream, I get "Can't call method 'sysread' on undefined", which leads me to believe the socket is not recreated correctly.
At no time do the stream's built-in 'on_read_error' or 'on_write_error' handlers fire, regardless of how much I write to the socket with the current code, although if I destroy the socket entirely they do generate an error.
The socket simply seems to still be active after I know it has closed, and a reconnect does not seem to change anything. No errors are generated, but the socket is not being written to.
Is there different syntax for reconnecting to an IO::Socket::INET socket? So far, calls to ->connect() or rebuilding from scratch seem to be the only options for a closed connection, and neither seems to work.
You simply cannot connect an existing socket multiple times; you can only close it and create a new socket. This has nothing to do with IO::Socket::INET, IO::Async::Stream or even Perl: it is simply how the sockets API works.
In detail: the local socket never actually got disconnected, i.e. it is still configured to send data from a specific local IP address and port to a specific remote address and port. It is only that sending no longer works, because the underlying TCP connection is broken or closed (i.e. a FIN has been exchanged). Since there is no API to unbind and un-connect a socket, the only way forward is to close it and create a new one, which is unbound and unconnected until connect is called. The new socket may or may not get the same file descriptor as the previous one.
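In terms of the code in the question, that means discarding the dead stream and building a brand-new socket plus a brand-new IO::Async::Stream. A rough sketch, assuming $a_local_network_host and $comm_port are defined as in the question and that your IO::Async is recent enough to have $loop->watch_time (connect_notify is a hypothetical helper name):

use IO::Async::Loop;
use IO::Async::Stream;
use IO::Socket::INET;

# Hypothetical helper: build a fresh socket and stream, and schedule another
# attempt if the remote is still down. Assumes $a_local_network_host and
# $comm_port are defined as in the question.
sub connect_notify {
    my ($loop) = @_;

    my $sock = IO::Socket::INET->new(
        PeerHost => $a_local_network_host,
        PeerPort => $comm_port,
        Proto    => 'tcp',
    ) or do {
        # remote not back yet: retry shortly instead of blocking the loop
        $loop->watch_time( after => 1, code => sub { connect_notify($loop) } );
        return;
    };

    my $stream = IO::Async::Stream->new(
        handle  => $sock,
        on_read => sub {
            my ( $self, $buffref, $eof ) = @_;
            if ($eof) {
                # This stream is finished with; schedule a brand-new
                # socket + stream rather than trying to reuse this one.
                $loop->watch_time( after => 1,
                    code => sub { connect_notify($loop) } );
            }
            return 0;
        },
    );

    $loop->add($stream);
    return $stream;
}

Whichever stream is currently live is the one writes must go to, so a real program would keep the value returned by connect_notify somewhere central (or route writes through another small helper) instead of holding on to the original $notifystream, which is exactly the stale-handle situation described above.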

Sending Multiple Payloads Over Socket in Perl

Edit: the problem is with IIS, not with the Perl code I'm using. Someone else was talking about the same problem here: https://stackoverflow.com/a/491445/1179075
Long-time reader here, first time posting.
So I'm working on some existing code in Perl that does the following:
1. Create socket
2. Send some data
3. Close socket
4. Loop back to 1 until all data is sent
To avoid the overhead of creating and closing sockets all the time, I decided to do this:
1. Create socket
2. Send some data
3. Loop back to 2 until all data is sent
4. Close socket
The thing is, only the first payload is being sent - all subsequent ones are ignored. I'm sending this data to a .NET web service, and IIS isn't receiving the data at all. Somehow the socket is being closed, and I have no further clue why.
Here's the script I'm using to test my new changes:
use IO::Socket;

my $sock = IO::Socket::INET->new(PeerAddr => $hostname, PeerPort => 80,
                                 Proto => "tcp", Timeout => "1000")
    || die "Failure: $! ";

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    print "$sent\n";
    sleep(1);
}
close($sock);
So this doesn't work - IIS only receives the very first ping. If I move $sock's assignment and closing into the loop, however, IIS correctly receives every single ping.
Am I just using sockets incorrectly here, or is there some arcane setting in IIS that I need to change?
Thanks!
I think your problem is buffering. Turn off buffering on the socket, or flush it after each write (closing the socket has the side-effect of flushing it).
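If you want to be explicit about the flushing (IO::Socket handles have autoflush turned on by default in any recent Perl, so this is mostly belt and braces):

# autoflush() comes from IO::Handle, which IO::Socket::INET inherits;
# with it on, each print goes out immediately rather than sitting in
# the PerlIO buffer.
$sock->autoflush(1);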
What output are you getting? You have a print after the send(); if send() fails, it returns undef. You can print out the error like this:
my $sent = $sock->send($msg);
die "Failed send: $!\n" unless defined $sent;
print "Sent $sent bytes\n";
My own guess is that the service that you're connecting to is closing the connection, which is why only one gets through, and also why creating a new connection each time would work.
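One way to test that guess (a sketch only, not from the original answer; it assumes the same $sock and $basic_http_ping_message as in the question) is to read the response after a send and look at what the server says about the connection:

# Hypothetical check: after a send, read the HTTP response headers and see
# whether the server announces that it will close the connection.
my $sent = $sock->send($basic_http_ping_message);
die "Failed send: $!\n" unless defined $sent;

my $headers = '';
while (my $line = <$sock>) {
    last if $line =~ /^\r?$/;   # blank line ends the header block
    $headers .= $line;
}
print "server intends to close the connection\n"
    if $headers =~ /^Connection:\s*close/mi;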

How can I listen on multiple sockets in Perl?

I want to listen on different sockets in a TCP/IP client written in Perl. I know I have to use select(), but I don't know exactly how to implement it. Can someone show me examples?
Use the IO::Select module. perldoc IO::Select includes an example.
Here's a client example. Not guaranteed to be typo-free or even to work right:
use IO::Select;
use IO::Socket;
# also look at IO::Handle, which the IO::Socket classes inherit from

my $lsn1 = IO::Socket::INET->new(PeerAddr => 'example.org', PeerPort => 8000, Proto => 'tcp');
my $lsn2 = IO::Socket::INET->new(PeerAddr => 'example.org', PeerPort => 8001, Proto => 'tcp');
my $lsn3 = IO::Socket::INET->new(PeerAddr => 'example.org', PeerPort => 8002, Proto => 'tcp');

my $sel = IO::Select->new;
$sel->add($lsn1);
$sel->add($lsn2);
# don't add the third socket to the select if you are never going to read from it

while (my @ready = $sel->can_read) {
    foreach my $fh (@ready) {
        # read your data
        my $line = $fh->getline();
        # do something with $line

        # print the results on the third socket
        $lsn3->print("blahblahblah");
    }
}
(This was too big to put in a comment field.)
You need to define more clearly what you want to do. You have stated that you need to read from port A and write to port B, and that is what the code above does: it waits for data to come in on the sockets $lsn1 and $lsn2 (ports 8000 and 8001), reads a line, then writes something back out to example.org on port 8002 (socket $lsn3).
Note that select is really only necessary if you need to read from multiple sockets. If you strictly need to read from only one socket, then scrap the IO::Select object and the while loop and just do $line = <$lsn1>; that will block until a line is received.
Anyway, by your definition, the code above is a client: it actively connects to the server (example.org in this case). I suggest you read up on how IO::Socket::INET works; the parameters you pass control whether it creates a listening socket or a connecting one.
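To illustrate that last point, a small sketch (not part of the original answer) contrasting the two kinds of IO::Socket::INET construction; the hosts and ports are arbitrary:

use IO::Socket::INET;

# Client: PeerAddr/PeerPort make this an actively connecting socket.
my $client = IO::Socket::INET->new(
    PeerAddr => 'example.org',
    PeerPort => 8000,
    Proto    => 'tcp',
) or die "connect failed: $!";

# Server: Listen/LocalPort make this a passively listening socket.
my $server = IO::Socket::INET->new(
    LocalPort => 9000,
    Listen    => 5,
    Proto     => 'tcp',
    ReuseAddr => 1,
) or die "listen failed: $!";

my $conn = $server->accept;   # blocks until a client connects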

IO::Socket timing out when getting response

I'm trying to connect to a web service using IO::Socket::INET (yes, I know that there are lots of better modules for doing this, but I don't have them and can't add them, so please don't suggest it), but I'm timing out (I think that's what it's doing) waiting for a response.
Here's the basic crux of my code (I previously populate the content with all the proper headers, and set it up, etc):
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
It appears that my request is made immediately, then it waits about 2 minutes before spitting back the response. If I alter the code to use a raw socket recv instead of getlines(), ala:
$httpSock->recv($data, 1024);
I get the response immediately (although only the first 1024 chars). Am I doing something wrong here? I'm using a late enough version of IO::Socket that autoflush should be enabled by default, but turning it on explicitly didn't seem to make any difference. I could probably also just keep reading from the socket until I got the entire response, but that's definitely messier than using getlines() or <$httpSock>.
Thanks in advance.
I'm having an issue re-creating the problem with the code snippet you've posted. Here's the code I tested with:
use strict;
use warnings;
use IO::Socket;

my $httpSock = IO::Socket::INET->new(
    PeerAddr => 'www.google.com',
    PeerPort => '80',
    Proto    => 'tcp',
);

my $content = "HEAD / HTTP/1.0\r\nHost: www.google.com\r\n\r\n";
$httpSock->print($content);

my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
Here are the results:
$ time ./1.pl
HTTP/1.0 200 OK
-snip-
real 0m0.084s
user 0m0.025s
sys 0m0.007s
The problem is that getlines() waits until the connection is closed. If the web service you are connecting to doesn't close the connection, getlines() keeps waiting, thinking more data is on the way. When your connection times out after those two minutes or so, getlines() sees the connection close and returns the lines it has received. recv(), on the other hand, grabs whatever is on the connection at that time, up to the limit you give it, and returns it in the buffer you hand it immediately, although it will wait for some data to arrive if there is none at all yet. I know you think it's messy, but this might work out for you:
use Socket qw(MSG_DONTWAIT);   # exports the non-blocking recv flag

my $message = "";
my $buf     = "";
$httpSock->recv($buf, 1024);          # blocks until the first chunk arrives
while (length($buf) > 0) {
    $message .= $buf;
    $buf = "";                        # clear so a failed recv ends the loop
    $httpSock->recv($buf, 1024, MSG_DONTWAIT);
}
print $message;
The MSG_DONTWAIT flag causes recv to return immediately instead of waiting when there is no data left on the connection. You can also increase 1024 to some bigger number to reduce the number of loop iterations, or possibly even get the whole message at once.
This should also let you keep the socket open for further use until you close it yourself.
I am wondering whether the Google example works because google.com closes the connection after it responds.
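That guess is easy to act on if the service speaks HTTP: ask the server to close the connection after its response, either by using HTTP/1.0 or by sending an explicit Connection: close header, and getlines() will then return as soon as the response is complete. A sketch (the path and host are placeholders):

# HTTP/1.1 keeps connections alive by default; 'Connection: close' asks the
# server to shut the socket after responding, so getlines() can finish.
my $request = join "\r\n",
    "GET /some/path HTTP/1.1",     # hypothetical path
    "Host: www.example.com",       # hypothetical host
    "Connection: close",
    "", "";
$httpSock->print($request);
print while <$httpSock>;           # reads until the server closes the socket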

How do I use raw sockets in Perl?

How can you get a raw socket in Perl, and then what's the best way to build a packet for use with it?
The same way you do in C... by setting the socket type when creating the socket.
In the example on CPAN, use SOCK_RAW rather than SOCK_DGRAM (UDP) or SOCK_STREAM (TCP).
NOTE: creating raw sockets typically requires administrative privileges (i.e. root on UNIX). Windows may have the ability to create raw sockets disabled; you'll just have to test it and see.
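For example, a raw ICMP socket created "the C way" might look like the sketch below (illustrative only; it needs root or the equivalent capability):

use strict;
use warnings;
use Socket;

# socket(2) with SOCK_RAW and the ICMP protocol number; the kernel still
# adds the IP header for you unless IP_HDRINCL is set separately.
my $proto = getprotobyname('icmp');
socket(my $raw, AF_INET, SOCK_RAW, $proto)
    or die "Can't create raw socket (are you root?): $!";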
Perhaps searching CPAN might help? IO::Socket comes to mind.
At first I was thinking that most previous answers were not responsive to the question.
After further thought, I think the author is probably not asking the right question.
If you're writing an application, you don't usually think about "building packets"; you just open sockets and format up the data payload, and the protocol stack builds the packets carrying your data. OK, if you're using datagrams you do need to define, generate and parse your payloads, but you typically let the kernel encapsulate them at the network level (e.g. add the IP header) or the link layer (e.g. add the Ethernet framing). You usually don't use pcap. Sometimes just pack and unpack, and maybe vec, are enough (a small example follows below).
If you're writing an unusual packet processor such as an active hostile attack tool, a man-in-the-middle process, or a traffic-shaping device, then you would be more likely to be "building packets" and using pcap. Maybe Net::Packet is for you as well.
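To make the pack/unpack remark above concrete, here is a tiny, purely illustrative payload format (the field layout is made up):

use strict;
use warnings;

# Hypothetical record: 16-bit id, 16-bit payload length, then raw bytes,
# all in network byte order ('n' = unsigned 16-bit big-endian).
my $id      = 42;
my $data    = "hello";
my $payload = pack 'n n a*', $id, length($data), $data;

# The receiving side reverses it with unpack.
my ($rid, $rlen, $rdata) = unpack 'n n a*', $payload;
printf "id=%d len=%d data=%s\n", $rid, $rlen, $rdata;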
As austirg and others said, Socket will do this just fine:
use Socket;
socket my $socket, PF_INET, SOCK_RAW, 0 or die "Couldn't create raw socket: $!";
send $socket, $message, $flags, $to or die "Couldn't send packet: $!";
my $from = recv $socket, $message, $length, $flags or die "Couldn't receive from socket: $!";
Looks like Net::RawIP was what I was looking for:
use Net::RawIP;

my $a = Net::RawIP->new;
$a->set({
    ip  => { saddr => 'my.target.lan', daddr => 'my.target.lan' },
    tcp => { source => 139, dest => 139, psh => 1, syn => 1 },
});
$a->send;
$a->ethnew("eth0");
$a->ethset(source => 'my.target.lan', dest => 'my.target.lan');
$a->ethsend;
my $p = $a->pcapinit("eth0", "dst port 21", 1500, 30);
my $f = dump_open($p, "/my/home/log");
loop $p, 10, \&dump, $f;
The basic call to get a socket is... socket(). It comes standard with Perl 5, which gives you the traditional UNIX socket(), bind(), listen(), and accept() calls.
For a more object oriented model, check out IO::Socket.
Be aware that if you're trying to use raw sockets to send a pile of SYN packets and you just "use Socket;", that's going to fill up your ARP tables and bomb out with "No buffer space available" and a stack of CLOSE_WAIT entries in netstat (which stops your machine from making any more connections of any kind until some of them free up).
Or in other words: you really do need Net::RawIP; it makes a difference.