Sending Multiple Payloads Over Socket in Perl - perl

Edit: the problem is with IIS, not with the Perl code I'm using. Someone else was talking about the same problem here: https://stackoverflow.com/a/491445/1179075
Long-time reader here, first time posting.
So I'm working on some existing code in Perl that does the following:
1. Create socket
2. Send some data
3. Close socket
4. Loop back to 1 until all data is sent
To avoid the overhead of creating and closing sockets all the time, I decided to do this:
1. Create socket
2. Send some data
3. Loop back to 2 until all data is sent
4. Close socket
The thing is, only the first payload is being sent - all subsequent ones are ignored. I'm sending this data to a .NET web service, and IIS isn't receiving the data at all. Somehow the socket is being closed, and I have no further clue why.
Here's the script I'm using to test my new changes:
use IO::Socket;
my $sock = new IO::Socket::INET(
    PeerAddr => $hostname,
    PeerPort => 80,
    Proto    => "tcp",
    Timeout  => "1000",
) || die "Failure: $!";

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    print "$sent\n";
    sleep(1);
}
close($sock);
So this doesn't work - IIS only receives the very first ping. If I move $sock's assignment and closing into the loop, however, IIS correctly receives every single ping.
Am I just using sockets incorrectly here, or is there some arcane setting in IIS that I need to change?
Thanks!

I think your problem is buffering. Turn off buffering on the socket, or flush it after each write (closing the socket has the side-effect of flushing it).
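If the data is actually written with print rather than send, a minimal illustration of that flushing suggestion follows (send() itself bypasses the Perl I/O buffer, so this only matters for buffered writes):

use IO::Handle;    # gives the socket handle autoflush/flush methods

$sock->autoflush(1);                      # flush after every buffered write
# or flush explicitly after each print:
$sock->print($basic_http_ping_message);
$sock->flush;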

What output are you getting? You have a print after the send(); if send() fails, it will return undef. You can print out the error like:
my $sent = $sock->send($msg);
die "Failed send: $!\n" unless defined $sent;
print "Sent $sent bytes\n";
My own guess is that the service that you're connecting to is closing the connection, which is why only one gets through, and also why creating a new connection each time would work.
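If that guess is right and the goal is still to reuse one connection, a hedged sketch of the usual fix is to make each ping a complete HTTP/1.1 request that asks for keep-alive. The request path below is a placeholder, and $hostname is assumed from the question's script:

my $basic_http_ping_message = join "\r\n",
    "POST /ping HTTP/1.1",          # placeholder request line, not from the original question
    "Host: $hostname",
    "Connection: keep-alive",
    "Content-Length: 0",            # must match the body actually sent (none here)
    "", "";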

can't detect closed connection with $sock->connected() [duplicate]

I'm trying - and failing - to get a Perl server to detect and drop the connection to a client that has broken it. Everywhere I looked, the suggested method is to use the socket's ->connected() method, but in my case it fails.
This is the server absolutely minimized:
#!/usr/bin/perl
use IO::Socket;

STDOUT->autoflush(1);

my $server = new IO::Socket::INET (
    Listen    => 7,
    Reuse     => 1,
    LocalAddr => '192.168.0.29',
    LocalPort => '11501',
    Proto     => 'tcp',
);
die "Could not create socket: $!\n" unless $server;

print "Waiting for clients\n";
while ($client = $server->accept()) {
    print "Client connected\n";
    do {
        $client->recv($received, 1024);
        print $received;
        select(undef, undef, undef, 0.1); # wait 0.1s before next read, not to spam the console if recv returns immediately
        print ".";
    } while ($client->connected());
    print "Client disconnected\n";
}
I connect to the server with Netcat and everything works fine: the server receives anything I send. But when I press Ctrl-C to interrupt Netcat, recv no longer waits, yet $client->connected() still returns a true-like value, and the main loop never returns to waiting for the next client.
(note - the above example has been absolutely minimized to show the problem, but in the complete program the socket is set to non-blocking, so I believe I can't trivially depend on recv returning an empty string. Unless I'm wrong?)
connected can't be used to reliably learn whether the peer has initiated a shutdown. It's mentioned almost word for word in the documentation:
Note that this method considers a half-open TCP socket to be "in a connected state". [...] Thus, in general, it cannot be used to reliably learn whether the peer has initiated a graceful shutdown because in most cases (see below) the local TCP state machine remains in CLOSE-WAIT until the local application calls "shutdown" in IO::Socket or close. Only at that point does this function return undef.
(Emphasis mine.)
If the other end has disconnected, recv will succeed but leave the buffer empty, so just check the buffer that recv fills.
while (1) {
    my $rv = $client->recv(my $received, 64*1024);
    die($!) if !defined($rv);    # undef means an error occurred
    last if $received eq "";     # an empty buffer means EOF
    print($received);
}
Additional bug fix: The above now calls recv before print.
Additional bug fix: Removed the useless sleep. recv will block if there's nothing received.
Performance fix: No reason to ask for just 1024 bytes. If there's any data available, it will be returned. So you might as well ask for more to cut down on the number of calls to recv and print.
Note that even with this solution, an ungraceful disconnection (power outage, network drop, etc) will go undetected. One could use a timeout or a heartbeat mechanism to solve that.
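A minimal sketch of the timeout idea, using IO::Select to give the read a deadline; the 30-second window is an arbitrary choice for illustration, not something from the question:

use IO::Select;

my $sel = IO::Select->new($client);
while (1) {
    # Treat 30 seconds of silence as a dead client.
    last unless $sel->can_read(30);
    my $rv = $client->recv(my $received, 64*1024);
    die($!) if !defined($rv);   # undef means an error occurred
    last if $received eq "";    # an empty buffer means EOF
    print($received);
}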

Perl IO::Socket::INET + IO::Async::Stream reconnect to TCP server when disconnected

For the life of me, I can't seem to figure out how to get a standard TCP socket connection to reconnect after a disconnect, particularly in the context of an IO::Async::Loop.
Some basics:
#!/usr/bin/perl
use strict;
use warnings;
use Socket;
use IO::Async::Loop;
use IO::Async::Stream;
use IO::Socket;
use Time::HiRes qw(usleep);

# standard event loop
my $loop = IO::Async::Loop->new;

# notification service socket connection; we only write outgoing
my $NOTIFY = IO::Socket::INET->new(
    PeerHost  => $a_local_network_host,
    PeerPort  => $comm_port,
    Proto     => 'tcp',
    Type      => SOCK_STREAM,
    ReuseAddr => 1,
    Blocking  => 0
) or warn("Can't connect to NOTIFY: $!\n");
setsockopt($NOTIFY, SOL_SOCKET, SO_KEEPALIVE, 1);

# main interface element via IO::Async
my $notifystream = IO::Async::Stream->new(
    handle  => $NOTIFY,
    on_read => sub {
        my ($self, $buffref, $eof) = @_;
        # here's where we need to handle $eof if the remote goes away
        if ($eof) {
            # i have tried several things here
            usleep(200000); # give the remote service some milliseconds to start back up
            # process fails if remote is not back up, so i know the timeout is 'good enough' for this test

            # attempt to reconnect. have also tried recreating from scratch
            $NOTIFY->connect("$a_local_network_host:$comm_port");
            # this doesn't seem to have any effect
            $self->configure(handle => $NOTIFY);
        }
    }
);
$loop->add($notifystream);

# kickstart the event loop
$loop->run;

### -- Meanwhile, elsewhere in the code -- ###
$notifystream->write("data for notification service\n");
In reality, there are many more things going on in the loop. I also have more sophisticated ways to test for a closed or broken socket, further error handlers on $notifystream, and a better timeout/backoff for reconnecting to the remote service; however, this should show the main crux of what I'm doing.
When the remote server goes away for any reason, I'd like to attempt to reconnect to it without disrupting the rest of the system. In most cases the remote sends EOF cleanly because it's intentionally rebooting (not my choice, just something I have to deal with), but I'd also like to handle other communication errors as well.
In practice, the above code acts as though it works; however, the remote service no longer receives anything written to $notifystream. No errors are generated, and $notifystream happily accepts further writes, but they are not delivered to the remote.
I have a feeling I'm doing this wrong. I'm not looking to rewrite the rest of the application's event loop, so please no 'just use AnyEvent'-type responses -- I'm really hoping to gain a better understanding of how to reconnect/reuse/recreate the variables in use here (IO::Socket::INET and IO::Async::Stream) to compensate when a remote server is temporarily unavailable.
Any suggestions or references towards this goal are welcome. Thanks!
-=-=-=-=-
To summarize the errors I have (and have not) received:
If I leave out the usleep, the reconnect (or recreation) of the base socket fails because the remote service is not yet available.
If I attempt to recreate the socket from scratch and then 'configure' the stream, I get "can't call method sysread on undefined", which leads me to believe the socket is not recreated correctly.
At no time do the stream's built-in 'on_read_error' or 'on_write_error' handlers fire, regardless of how much I write to the socket with the current code, although if I destroy the socket entirely they do generate an error.
The socket simply seems to still be active after I know it has closed, and reconnecting does not seem to change anything. No errors are generated, but the socket is not being written to.
Is there a different syntax for reconnecting an IO::Socket::INET socket? So far, calling ->connect() or rebuilding from scratch seem to be the only options for a closed connection, and neither works.
You simply cannot connect an existing socket multiple times. You can only close the socket and create a new socket. This has nothing to do with IO::Socket::INET, IO::Async::Stream or even Perl but this is how the sockets API works.
In detail: the local socket actually never got disconnected, i.e. it is still configured to send data from a specific local IP address and port to a specific peer address and port. It's just that sending will no longer work, because the underlying TCP connection is broken or closed (i.e. FIN exchanged). Since there is no API to unbind and un-connect a socket, the only way forward is to close it and create a new one, which stays unbound and unconnected until connect is called. This new socket may or may not get the same file descriptor as the previous one.
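A minimal sketch of that close-and-recreate approach, under the assumption that building a fresh IO::Async::Stream around a fresh socket is acceptable; the helper name make_stream and the one-second retry delay are illustrative, not part of the original code:

sub make_stream {
    my ($loop, $host, $port) = @_;

    my $sock = IO::Socket::INET->new(
        PeerHost => $host,
        PeerPort => $port,
        Proto    => 'tcp',
    ) or return undef;                      # caller decides how to retry / back off

    my $stream = IO::Async::Stream->new(
        handle  => $sock,
        on_read => sub {
            my ($self, $buffref, $eof) = @_;
            if ($eof) {
                # The old socket is gone for good: drop this stream and build a
                # brand-new socket + stream after a short delay.
                $loop->remove($self);
                $loop->watch_time(
                    after => 1,
                    code  => sub { make_stream($loop, $host, $port) },
                );
            }
            return 0;
        },
    );
    $loop->add($stream);
    return $stream;
}

In a real program the rest of the code also needs to pick up the new stream object (for example by storing it in a shared variable), since anything written to the old, removed stream goes nowhere.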

Perl Net::SSLeay check if socket is available to read

I am using a Perl module called Net::APNS::Persistent. It helps me open a persistent connection with Apple's APNS server and send push notifications through APNS. This module uses Net::SSLeay for SSL communication with the APNS server.
Now I want to read from my socket periodically to check whether APNS sends back any response. Net::APNS::Persistent already has a function called _read(), which looks like this:
sub _read {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    my $data = Net::SSLeay::ssl_read_all($ssl);
    die_if_ssl_error("error reading from ssl connection: $!");
    return $data;
}
However, this function works only after APNS drops the connection and I get an error while trying to write. At other times my script gets stuck at:
my $data = Net::SSLeay::ssl_read_all( $ssl );
I checked the Net::SSLeay documentation and found it has a method called peek:
Copies $max bytes from the specified $ssl into the returned value. In contrast to the Net::SSLeay::read() function, the data in the SSL buffer is unmodified after the SSL_peek() operation.
I thought it might be useful, so I added another function within the Net::APNS::Persistent module:
sub ssl_peek {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    print "Peeking \n";
    my $data = Net::SSLeay::peek($ssl, $pending);
    print "Done peeking \n";
    return $data;
}
Unfortunately this also gave me the same problem. It only prints Peeking and never reaches the line where it would print Done peeking. I had the same problem using Net::SSLeay::read. Is there a way to check if the socket can be read, or maybe to set a read timeout so that my script doesn't get stuck while trying to read from the socket?
The APNS documentation says the following:
If you send a notification that is accepted by APNs, nothing is returned.
If you send a notification that is malformed or otherwise unintelligible, APNs returns an error-response packet and closes the connection. Any notifications that you sent after the malformed notification using the same connection are discarded, and must be resent
As long as your notifications as accepted, there won't be any data to read and thus a read operation on the socket will block. The only time there's data available is when there's an error, and then the connection is immediately closed. That should explain the behaviour you're observing.
To check if the underlying socket can be read, use select, i.e.
IO::Select->new(fileno($socket))->can_read($timeout);
$timeout can be 0 to just check and not wait, a number of seconds to wait, or undef to wait forever. But before you do the select, check whether data is still available in the SSL buffer:
if (Net::SSLeay::pending($ssl)) { ... use SSL_peek or SSL_read ... }
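Putting the two checks together, a minimal sketch, assuming $socket and $ssl come from $self->_connection as in the question's code; the helper name readable_within is made up for illustration:

use IO::Select;

# Returns true if a subsequent SSL read should not block.
sub readable_within {
    my ($socket, $ssl, $timeout) = @_;
    return 1 if Net::SSLeay::pending($ssl);     # data already buffered inside the SSL layer
    return IO::Select->new(fileno($socket))->can_read($timeout) ? 1 : 0;
}

# Poll for an APNS error-response packet without blocking:
if (readable_within($socket, $ssl, 0)) {
    my $data = Net::SSLeay::read($ssl);
    # ... handle the error-response packet or the closed connection ...
}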
Apart from that, it does look like the module you use does not even attempt to validate the server's certificate :(

Proper use of IO::Socket::INET for TCP client in perl

I have a question about how I should be using IO::Socket. I have a script that runs constantly, monitoring an Asterisk server for certain events. When these events happen, the script sends data from the event off to another server via a TCP socket. I've found that occasionally the socket will close. My question is whether I should use a single socket and keep it open forever (and figure out why it closes and prevent that), or open and close a new socket for each bit of data sent out?
My experience with this sort of thing is very minimal, and I've read all the documentation without finding the answer I'm looking for. Below is a sample of what I've got so far:
#!/usr/bin/perl
use Asterisk::AMI;
use IO::Socket;
use strict;
use warnings;

my %call;    # per-call state shared by the callbacks, keyed by Uniqueid

my $sock = new IO::Socket::INET (
    PeerAddr => '127.0.0.1',
    PeerPort => '1234',
    Proto    => 'tcp',
);

sub newchannel {
    my ($ami, $event) = @_;
    if ($event->{'Context'} eq "from-trunk") {
        my $unique_id = $event->{'Uniqueid'};
        my $this_call = $call{$unique_id};
        $this_call->{caller_name}   = $event->{'CallerIDName'};
        $this_call->{caller_number} = $event->{'CallerIDNum'};
        $this_call->{dnis}          = $event->{'Exten'};
        $call{$unique_id} = $this_call;
    }
}

sub ringcheck {
    my ($ami, $event) = @_;
    if ($event->{SubEvent} eq 'Begin') {
        my $unique_id = $event->{UniqueID};
        if (exists $call{$unique_id}) {
            my $this_call = $call{$unique_id};
            $this_call->{system_extension} = $event->{Dialstring};
            $this_call->{dest_uniqueid}    = $event->{DestUniqueID};
            printf $sock "R|%s|%s|%s||%s\n",
                $this_call->{caller_name},
                $this_call->{caller_number},
                $this_call->{system_extension},
                $this_call->{dnis};
            $this_call->{status} = "ringing";
        }
    }
}
There's a bit more to it than that, but this shows where I feel I should be starting/stopping a new socket (within the ringcheck sub).
Let me know if you need me to clarify or add anything.
Thanks!
Whether it is better to establish a new connection for each message or to keep the connection open depends on a few factors:
Is the overhead associated with establishing connections significant? This depends on factors such as the frequency with which messages need to be sent, and the quality of the network connection.
If the remote end is 'localhost', as in your sample script above, then this is not likely to be an issue, and in fact in that case I would recommend using a Unix domain socket instead anyway.
Is the remote end sending anything back? It is much harder to manage sporadic connections if either side may have asynchronous messages to send. That does not sound like the case for you, though.
Are there any significant resources which you would be holding up by keeping the connection open?
Note that I don't consider random connection dropouts a good reason to argue for making a new connection each time. If possible, it's better to diagnose that problem in any case; otherwise you might get unreliable performance no matter which approach you take.
In my experience a very common reason for seemingly random dropouts in long held TCP connections is intermediate tracking firewalls. Such firewalls will drop a connection if they don't see any activity on it for a period of time, to conserve their own resources. One way to combat this, which I use in some of my tools, is to set the socket option SO_KEEPALIVE on the socket, like this:
use Socket;
...
setsockopt($sock, SOL_SOCKET, SO_KEEPALIVE, 1);
This has a couple of benefits: it makes the kernel send keepalive messages on your connection at regular intervals, even if all is quiet, which by itself is enough to keep some firewalls happy. Also, if your connection does drop, your program can find out straight away instead of the next time you want to write to it (although you may not notice unless you are regularly checking for errors on your sockets).
Perhaps your best approach is to set SO_KEEPALIVE and keep your socket open, but also check for errors whenever you try to write to it; if you get an error, close and re-open the connection.
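A minimal sketch of that "write, and reconnect on error" idea; connect_sock() and send_line() are illustrative names rather than part of the original script, and the peer address mirrors the question's example:

use strict;
use warnings;
use IO::Socket::INET;
use Socket;

sub connect_sock {
    my $s = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => '1234',
        Proto    => 'tcp',
    ) or return undef;
    setsockopt($s, SOL_SOCKET, SO_KEEPALIVE, 1);
    return $s;
}

sub send_line {
    my ($sock_ref, $line) = @_;
    local $SIG{PIPE} = 'IGNORE';        # let a dead socket fail the print instead of killing the process
    for (1 .. 2) {                      # one retry with a fresh connection
        $$sock_ref ||= connect_sock() or next;
        return 1 if print { $$sock_ref } $line;
        close($$sock_ref);
        $$sock_ref = undef;             # force a reconnect on the retry
    }
    return 0;
}

# usage:
my $sock = connect_sock();
send_line(\$sock, "R|name|number|ext||dnis\n")
    or warn "could not deliver event\n";

Note that TCP can accept a write into the local buffer even after the peer has silently gone away, so the first write after such a drop may still "succeed"; SO_KEEPALIVE (or an application-level acknowledgement) is what eventually surfaces the error.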
This question may also be of use to you.

IO::Socket timing out when getting response

I'm trying to connect to a web service using IO::Socket::INET (yes, I know that there are lots of better modules for doing this, but I don't have them and can't add them, so please don't suggest it), but I'm timing out (I think that's what it's doing) waiting for a response.
Here's the basic crux of my code (I previously populate the content with all the proper headers, and set it up, etc):
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
It appears that my request is made immediately, then it waits about 2 minutes before spitting back the response. If I alter the code to use a raw socket recv instead of getlines(), like so:
$httpSock->recv($data, 1024);
I get the response immediately (although only the first 1024 chars). Am I doing something wrong here? I'm using a late enough version of IO::Socket that autoflush should be enabled by default, but turning it on explicitly didn't seem to make any difference. I could probably also just keep reading from the socket until I got the entire response, but that's definitely messier than using getlines() or <$httpSock>.
Thanks in advance.
I'm having an issue re-creating the problem with the code snippet you've posted. Here's the code I tested with:
use strict;
use warnings;
use IO::Socket;

my $httpSock = new IO::Socket::INET(
    PeerAddr => 'www.google.com',
    PeerPort => '80',
    Proto    => 'tcp',
);

my $content = "HEAD / HTTP/1.0\r\nHost: www.google.com\r\n\r\n";
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
Here are the results:
$ time ./1.pl
HTTP/1.0 200 OK
-snip-
real 0m0.084s
user 0m0.025s
sys 0m0.007s
The problem is that getlines() waits until the connection is closed. If the web service you are connecting to doesn't close your connection, the getlines function keeps waiting, thinking more data is on the way. When your connection times out after those 2 minutes or so, getlines sees the connection close and returns the lines it received. recv, on the other hand, grabs whatever is currently on the connection (up to the limit you pass) and returns it in the buffer you hand it immediately, although it will wait until some data arrives if there is none. I know you think it's messy, but this might work out for you:
$httpSock->recv(my $buf, 1024);
my $message = "";
while (length($buf) > 0) {
    $message .= $buf;
    $httpSock->recv($buf, 1024, MSG_DONTWAIT);
}
print $message;
The MSG_DONTWAIT flag causes recv not to wait for a message if the connection is empty. You can also increase 1024 to some big number to decrease the number of loops, or possibly even get the whole message at once.
This should also let you keep the socket open for further use until you close it yourself.
I am wondering if the google example works because google.com is closing the connection after it responds.
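If the web service really does hold the connection open, one hedged workaround, assuming a plain HTTP exchange (the request path and $hostname below are placeholders for whatever the real script uses), is to ask the server to close the connection, so getlines() returns as soon as the response is complete:

my $content = join "\r\n",
    "GET /some/path HTTP/1.1",      # placeholder request line
    "Host: $hostname",              # $hostname assumed from the surrounding script
    "Connection: close",            # server closes after responding, so getlines() sees EOF
    "", "";
$httpSock->print($content);
print while <$httpSock>;            # readline returns undef once the server closes the connection

The trade-off is one connection per request, which is essentially what the original create/send/close loop did anyway.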