Perl Net::SSLeay check if socket is available to read - perl

I am using a Perl module called Net::APNS::Persistent. It lets me open a persistent connection to Apple's APNS server and send push notifications through APNS. The module uses Net::SSLeay for SSL communication with the APNS server.
Now I want to read from my socket periodically to check whether APNS has sent back any response. Net::APNS::Persistent already has a function called _read(), which looks like this:
sub _read {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    my $data = Net::SSLeay::ssl_read_all( $ssl );
    die_if_ssl_error("error reading from ssl connection: $!");
    return $data;
}
However, this function only works after APNS drops the connection and I get an error while trying to write. The rest of the time my script gets stuck at
my $data = Net::SSLeay::ssl_read_all( $ssl );
I checked the Net::SSLeay docs and found it has a method called peek:
Copies $max bytes from the specified $ssl into the returned value. In contrast to the Net::SSLeay::read() function, the data in the SSL buffer is unmodified after the SSL_peek() operation.
I thought it might be useful, so I added another function to the Net::APNS::Persistent module:
sub ssl_peek {
    my $self = shift;
    my ($socket, $ctx, $ssl) = @{$self->_connection};
    print "Peeking \n";
    my $data = Net::SSLeay::peek( $ssl, $pending );
    print "Done peeking \n";
    return $data;
}
Unfortunately this has the same problem: it prints Peeking and never reaches the line that would print Done peeking. I had the same problem using Net::SSLeay::read. Is there a way to check whether the socket can be read, or to set a read timeout, so that my script doesn't get stuck trying to read from the socket?

The APNS documentation says the following:
If you send a notification that is accepted by APNs, nothing is returned.
If you send a notification that is malformed or otherwise unintelligible, APNs returns an error-response packet and closes the connection. Any notifications that you sent after the malformed notification using the same connection are discarded, and must be resent.
As long as your notifications are accepted, there won't be any data to read, so a read operation on the socket will block. The only time there's data available is when there's an error, and then the connection is immediately closed. That should explain the behaviour you're observing.

To check if the underlying socket can be read use select, i.e.
IO::Select->new(fileno($socket))->can_read($timeout);
$timeout can be 0 to check without waiting, a number of seconds to wait that long, or undef to wait forever. But before you do the select, check whether data is still available in the SSL buffer:
if (Net::SSLeay::pending($ssl)) { ... use SSL_peek or SSL_read ... }
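Putting the two checks together, a minimal sketch (try_read is a hypothetical helper name; $socket and $ssl are taken from @{$self->_connection} as in the code above):

```perl
use IO::Select;

# Hypothetical helper: returns data if any arrives within $timeout
# seconds, undef otherwise.
sub try_read {
    my ($socket, $ssl, $timeout) = @_;

    # Data already decrypted and buffered inside the SSL layer is
    # invisible to select(), so check the SSL buffer first.
    if (Net::SSLeay::pending($ssl)) {
        return Net::SSLeay::read($ssl);
    }

    # Otherwise wait at most $timeout seconds for the raw socket.
    if (IO::Select->new(fileno($socket))->can_read($timeout)) {
        return Net::SSLeay::read($ssl);
    }
    return undef;   # nothing to read yet
}
```

With $timeout set to 0 this becomes a pure poll you can call between writes.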
Apart from that, it looks like the module you use does not even attempt to validate the server's certificate :(


can't detect closed connection with $sock->connected() [duplicate]

I'm trying - and failing - to get a Perl server to detect and drop the connection to a client that broke it. Everywhere I looked, the suggested method is the socket's ->connected() method, but in my case it fails.
This is the server absolutely minimized:
#!/usr/bin/perl
use IO::Socket;

STDOUT->autoflush(1);

my $server = new IO::Socket::INET (
    Listen    => 7,
    Reuse     => 1,
    LocalAddr => '192.168.0.29',
    LocalPort => '11501',
    Proto     => 'tcp',
);
die "Could not create socket: $!\n" unless $server;
print "Waiting for clients\n";

while (my $client = $server->accept()) {
    print "Client connected\n";
    do {
        $client->recv($received, 1024);
        print $received;
        select(undef, undef, undef, 0.1); # wait 0.1s before the next read, not to spam the console if recv returns immediately
        print ".";
    } while ($client->connected());
    print "Client disconnected\n";
}
I connect to the server with Netcat and everything works fine: the server receives everything I send. But when I press Ctrl-C to interrupt Netcat, recv is no longer waiting, yet $client->connected() still returns a true-like value, and the main loop never returns to waiting for the next client.
(note - the above example has been absolutely minimized to show the problem, but in the complete program the socket is set to non-blocking, so I believe I can't trivially depend on recv returning an empty string. Unless I'm wrong?)
connected can't be used to reliably learn whether the peer has initiated a shutdown. It's mentioned almost word for word in the documentation:
Note that this method considers a half-open TCP socket to be "in a connected state". [...] Thus, in general, it cannot be used to reliably learn whether the peer has initiated a graceful shutdown because in most cases (see below) the local TCP state machine remains in CLOSE-WAIT until the local application calls "shutdown" in IO::Socket or close. Only at that point does this function return undef.
(Emphasis mine.)
If the other end disconnected, recv will signal EOF by leaving you an empty string. So just check what recv gives you.
while (1) {
    my $rv = $client->recv(my $received, 64*1024);
    die($!) if !defined($rv);   # undef means an error occurred
    last if $received eq "";    # empty string means EOF: peer closed the connection
    print($received);
}
Additional bug fix: The above now calls recv before print.
Additional bug fix: Removed the useless sleep. recv will block if there's nothing received.
Performance fix: No reason to ask for just 1024 bytes. If there's any data available, it will be returned. So you might as well ask for more to cut down on the number of calls to recv and print.
Note that even with this solution, an ungraceful disconnection (power outage, network drop, etc) will go undetected. One could use a timeout or a heartbeat mechanism to solve that.
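One way to sketch such a timeout, using IO::Select ($client is the accepted socket from the loop above; the 30-second idle limit is an arbitrary choice):

```perl
use IO::Socket;
use IO::Select;

# Sketch: treat a client as gone if it sends nothing for
# $idle_timeout seconds, even if TCP never saw a close.
my $idle_timeout = 30;
my $sel = IO::Select->new($client);

while (1) {
    # can_read returns the ready handles, or an empty list on timeout
    last unless $sel->can_read($idle_timeout);  # silent too long: drop it
    my $rv = $client->recv(my $received, 64*1024);
    die($!) if !defined($rv);   # an error occurred
    last if $received eq "";    # peer closed the connection gracefully
    print($received);
}
close($client);
```

A heartbeat works the other way around: the client sends a small message on a known interval, and the server applies the same idle timeout knowing a healthy client would have spoken by then.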

Why is IO::Socket::Socks unreliable when creating a proxy chain?

I want to use this module to create a connection using multiple SOCKS proxies, but the module I am using is very unreliable and slow compared to using an external solution such as proxychains. After successfully connecting through the chain of proxies, the while loop begins sending the data through the proxy chain. However, the program crashes with the error: "Out of memory". Does anyone have advice on how to fix the code? Any help would be appreciated.
while ($sock) {
    $sock->syswrite("GET / HTTP/1.1\r\n");
}
When I look at the Wireshark capture, the initial request is sent without any problems, but subsequent packets each contain duplicate HTTP requests. See: Wireshark capture
Edit: I tried using IO::Async, but the requests aren't sent.
my $loop = IO::Async::Loop->new;
my $handle = IO::Async::Handle->new(
    write_handle   => $sock,
    on_write_ready => sub {
        my $request = "GET / HTTP/1.1\r\n";
        print $sock $request;
    },
);
$loop->add( $handle );
$loop->loop_forever;
Edit: I was able to fix the script by establishing a new chain for each request I send. Is it possible to send a CONNECT request through the socks chain to establish a new connection each time I send a http request?
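For what it's worth, the request written above is not a complete HTTP/1.1 request: it has no Host header and no blank line terminating the headers, and it is written in a tight loop without ever reading a response. A hedged sketch of one complete request/response cycle (assuming $sock is the already-connected SOCKS-chain socket and $host is the target host):

```perl
# Send one well-formed HTTP/1.1 request over the existing chain,
# then read the response before writing the next request.
my $request = "GET / HTTP/1.1\r\n"
            . "Host: $host\r\n"
            . "Connection: keep-alive\r\n"
            . "\r\n";   # blank line terminates the headers

$sock->syswrite($request) or die "write failed: $!";

# Read whatever response arrives before the next request, instead
# of writing in an unbounded loop (which also explains the memory
# growth: unread/unflushed data just accumulates).
$sock->sysread(my $response, 64*1024) or die "read failed: $!";
```

With keep-alive honoured by the target server, this should let you reuse one chain for several requests rather than rebuilding it each time.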

Perl send() UDP packet to multiple active clients

I am implementing a perl based UDP server/client model. Socket functions recv() and send() are used for server/client communication. It seems that send() takes the return address from the recv() call and I can only get the server to respond back to the client that sent the initial request. However, I'm looking for the server to send the data to all active clients instead of only the source client. If the peer_address and peer_port for each active client is known, how could I use perl send() to route packets to the specific clients?
Some attempts:
foreach (@active_clients) {
    #$_->{peer_socket}->send($data);
    #$socket->send($_->{peer_socket}, $data, 0, $_->{peer_addr});
    #send($_->{peer_socket}, $data, 0, $_->{peer_addr});
    $socket->send($data);   # echoes back to the source client
}
Perldoc send briefly describes parameter passing. I have attempted to adopt this and the result is either 1) The client simply receives the socket glob object or 2) the packet is never received or 3) an error is thrown. Any examples of how to use send() to route to known clients would be much appreciated.
Yes, just store the addresses some place and answer on them:
# receiving some packets
while (my $a = recv($sock, $buf, 8192, 0)) {
    print "Received $buf\n";
    push @addrs, $a;
    # exit at some point
}

# send the answers
for my $a (@addrs) {
    send($sock, "OK", 0, $a);
}
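The same idea as a self-contained sketch over loopback (port choice and payloads here are arbitrary): the server records the peer address recv returns for each datagram, then sends to all of them, not just the last sender.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# "Server" socket; LocalPort 0 lets the OS pick a free port.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
    Proto     => 'udp',
) or die "server socket: $!";

# Two "clients" on loopback, connected to the server's port.
my @clients = map {
    IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => $server->sockport,
        Proto    => 'udp',
    ) or die "client socket: $!"
} 1 .. 2;

# Each client pings the server; the server records the peer
# address that recv returns for every datagram.
my @addrs;
for my $c (@clients) {
    $c->send("hello") or die "send: $!";
    my $peer = $server->recv(my $buf, 1024);
    push @addrs, $peer;
}

# Broadcast to every recorded client address.
send($server, "broadcast", 0, $_) for @addrs;

for my $c (@clients) {
    $c->recv(my $msg, 1024);
    print "client got: $msg\n";
}
```

Each client prints "client got: broadcast", showing that the saved sockaddr from recv is all send needs to route a reply to a specific client.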

Sending Multiple Payloads Over Socket in Perl

Edit: the problem is with IIS, not with the Perl code I'm using. Someone else was talking about the same problem here: https://stackoverflow.com/a/491445/1179075
Long-time reader here, first time posting.
So I'm working on some existing code in Perl that does the following:
Create socket
Send some data
Close socket
Loop back to 1 until all data is sent
To avoid the overhead of creating and closing sockets all the time, I decided to do this:
Create socket
Send some data
Loop back to 2 until all data is sent
Close socket
The thing is, only the first payload is being sent - all subsequent ones are ignored. I'm sending this data to a .NET web service, and IIS isn't receiving the data at all. Somehow the socket is being closed, and I have no further clue why.
Here's the script I'm using to test my new changes:
use IO::Socket;
my $sock = new IO::Socket::INET(
    PeerAddr => $hostname,
    PeerPort => 80,
    Proto    => "tcp",
    Timeout  => "1000",
) || die "Failure: $! ";

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    print "$sent\n";
    sleep(1);
}
close($sock);   # never reached: the loop above runs forever
So this doesn't work - IIS only receives the very first ping. If I move $sock's assignment and closing into the loop, however, IIS correctly receives every single ping.
Am I just using sockets incorrectly here, or is there some arcane setting in IIS that I need to change?
Thanks!
I think your problem is buffering. Turn off buffering on the socket, or flush it after each write (closing the socket has the side-effect of flushing it).
What output are you getting? You have a print after the send(); if send() fails, it will return undef. You can print the error like this:
my $sent = $sock->send($msg);
die "Failed send: $!\n" unless defined $sent;
print "Sent $sent bytes\n";
My own guess is that the service that you're connecting to is closing the connection, which is why only one gets through, and also why creating a new connection each time would work.
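If the server is indeed closing the connection, one way to notice between pings is to poll the socket for readability: on a TCP connection closed by the peer, a read returns an empty string. A rough sketch, reusing the variable names from the question:

```perl
use IO::Socket;
use IO::Select;

# Sketch: detect a server-side close between pings and reconnect.
my $sel = IO::Select->new($sock);

while (1) {
    my $sent = $sock->send($basic_http_ping_message);
    die "Failed send: $!\n" unless defined $sent;

    # Readable socket + empty read means the server closed on us.
    if ($sel->can_read(0)) {
        $sock->recv(my $buf, 64*1024);
        if (!defined $buf || $buf eq "") {
            $sock = IO::Socket::INET->new(
                PeerAddr => $hostname,
                PeerPort => 80,
                Proto    => "tcp",
            ) or die "reconnect failed: $!";
            $sel = IO::Select->new($sock);
        }
    }
    sleep(1);
}
```

This at least tells you whether IIS is dropping the connection after the first request, which would confirm the guess above.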

IO::Socket timing out when getting response

I'm trying to connect to a web service using IO::Socket::INET (yes, I know that there are lots of better modules for doing this, but I don't have them and can't add them, so please don't suggest it), but I'm timing out (I think that's what it's doing) waiting for a response.
Here's the basic crux of my code (I previously populate the content with all the proper headers, and set it up, etc):
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line ( @lines ) {
    print $line;
}
It appears that my request is made immediately, then it waits about 2 minutes before spitting back the response. If I alter the code to use a raw socket recv instead of getlines(), ala:
$httpSock->recv($data, 1024);
I get the response immediately (although only the first 1024 chars). Am I doing something wrong here? I'm using a late enough version of IO::Socket that autoflush should be enabled by default, but turning it on explicitly didn't seem to make any difference. I could probably also just keep reading from the socket until I got the entire response, but that's definitely messier than using getlines() or <$httpSock>.
Thanks in advance.
I'm having an issue re-creating the problem with the code snippet you've posted. Here's the code I tested with:
use strict;
use warnings;
use IO::Socket;
my $httpSock = new IO::Socket::INET(
    PeerAddr => 'www.google.com',
    PeerPort => '80',
    Proto    => 'tcp',
);
my $content = "HEAD / HTTP/1.0\r\nHost: www.google.com\r\n\r\n";
$httpSock->print($content);
my @lines = $httpSock->getlines();
foreach my $line (@lines) {
    print $line;
}
Here are the results:
$ time ./1.pl
HTTP/1.0 200 OK
-snip-
real 0m0.084s
user 0m0.025s
sys 0m0.007s
The problem is that getlines() waits until the connection is closed. If the web service you are connecting to doesn't close your connection, getlines will keep waiting, thinking more data is on the way. When your connection times out after those 2 minutes or so, getlines sees the connection close and returns the lines it received. recv, on the other hand, grabs whatever is on the connection at that moment, up to the limit you pass, and returns it in your buffer immediately, though it will wait for some data if there is none yet. I know you think it's messy, but this might work out for you:
use Socket qw(MSG_DONTWAIT);

$httpSock->recv($buf, 1024);
my $message = "";
while (length($buf) > 0) {
    $message .= $buf;
    $httpSock->recv($buf, 1024, MSG_DONTWAIT);
}
print $message;
The MSG_DONTWAIT flag causes recv not to wait for a message if the connection is empty. You can also increase 1024 to some big number to decrease the number of loops, or possibly even get the whole message at once.
This should also let you keep the sockets open for further use until you close it yourself.
I am wondering if the google example works because google.com is closing the connection after it responds.
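That is standard HTTP/1.0 behaviour: without keep-alive the server closes the connection after responding, which is exactly the close getlines() waits for. To avoid depending on the server closing at all, one alternative is to read the headers and then exactly Content-Length bytes of body (this sketch assumes $httpSock is the connected socket and the response is neither chunked nor compressed):

```perl
# Read the response headers until the blank line that ends them.
my $headers = "";
while ($headers !~ /\r\n\r\n/) {
    $httpSock->recv(my $buf, 1024);
    last unless defined $buf && length $buf;   # connection closed early
    $headers .= $buf;
}

# Anything after the blank line is already part of the body.
my ($head, $body) = split /\r\n\r\n/, $headers, 2;
my ($len) = $head =~ /^Content-Length:\s*(\d+)/mi;

# Keep reading until the whole body has arrived.
while (defined $len && length($body) < $len) {
    $httpSock->recv(my $buf, $len - length($body));
    last unless defined $buf && length $buf;
    $body .= $buf;
}
print $body;
```

This returns as soon as the declared body length has been read, and leaves the socket open for further requests if the server honours keep-alive.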